A Note on the Central Limit Theorem for a Class of Linear Systems


Yukio Nagahata
Department of Mathematics, Graduate School of Engineering Science, Osaka University, Toyonaka, Japan.
e-mail: nagahata@sigmath.es.osaka-u.ac.jp

Nobuo Yoshida
Division of Mathematics, Graduate School of Science, Kyoto University, Kyoto, Japan.
e-mail: nobuo@math.kyoto-u.ac.jp
(Supported in part by JSPS Grant-in-Aid for Scientific Research, Kiban (C).)

September 19, 2009

Abstract

We consider a class of continuous-time stochastic growth models on the d-dimensional lattice with non-negative real numbers as possible values per site. We remark that the central limit theorem proved in our previous work [NY09a] can be extended to a wider class of models, so that it covers the case of the potlatch/smoothing processes.

Contents

1 Introduction
  1.1 The model
  1.2 Results
2 The proof of Theorem 1.2.1
  2.1 The equivalence of (a)-(c)
  2.2 The equivalence of (c) and (d)
  2.3 The equivalence of (a), (b'), (c')
  2.4 The equivalence of (c') and (d')

1 Introduction

We write $\mathbb{N} = \{1,2,\ldots\}$, $\mathbb{N}^* = \{0\} \cup \mathbb{N}$, and $\mathbb{Z} = \{\pm x\,;\,x \in \mathbb{N}^*\}$. For $x = (x_1,\ldots,x_d) \in \mathbb{R}^d$, $|x|$ stands for the $\ell^1$-norm: $|x| = \sum_{i=1}^d |x_i|$. For $\eta = (\eta_x)_{x \in \mathbb{Z}^d} \in \mathbb{R}^{\mathbb{Z}^d}$, $|\eta| = \sum_{x \in \mathbb{Z}^d} |\eta_x|$. Let $(\Omega, \mathcal{F}, P)$ be a probability space. We write $P[X : A] = \int_A X\,dP$ and $P[X] = P[X : \Omega]$ for a random variable $X$ and an event $A$.

1.1 The model

We go directly into the formal definition of the model, referring the reader to [NY09a, NY09b] for the relevant background. The class of growth models considered here is a reasonably ample subclass of the one considered in [Lig85, Chapter IX] as linear systems.

We introduce a random vector $K = (K_x)_{x \in \mathbb{Z}^d}$ such that

$0 \le K_x \le b_K\,\mathbf{1}\{|x| \le r_K\}$ a.s. for some constants $b_K, r_K \in [0,\infty)$,   (1.1)

the set $\{x \in \mathbb{Z}^d\,;\,P[K_x] \neq 0\}$ contains a linear basis of $\mathbb{R}^d$.   (1.2)

The first condition (1.1) amounts to the standard boundedness and finite-range assumptions on the transition rates of interacting particle systems. The second condition (1.2) makes the model truly $d$-dimensional.

Let $\tau^{z,i}$ ($z \in \mathbb{Z}^d$, $i \in \mathbb{N}$) be i.i.d. mean-one exponential random variables and $T^{z,i} = \tau^{z,1} + \cdots + \tau^{z,i}$. Let also $K^{z,i} = (K^{z,i}_x)_{x \in \mathbb{Z}^d}$ ($z \in \mathbb{Z}^d$, $i \in \mathbb{N}$) be i.i.d. random vectors with the same distribution as $K$, independent of $\{\tau^{z,i}\}_{z \in \mathbb{Z}^d,\,i \in \mathbb{N}}$. We suppose that the process $(\eta_t)$ starts from a deterministic configuration $\eta_0 = (\eta_{0,x})_{x \in \mathbb{Z}^d} \in \mathbb{N}^{*\,\mathbb{Z}^d}$ with $|\eta_0| < \infty$. At time $t = T^{z,i}$, $\eta_{t-}$ is replaced by $\eta_t$, where

$\eta_{t,x} = \begin{cases} K^{z,i}_0\,\eta_{t-,z} & \text{if } x = z,\\ \eta_{t-,x} + K^{z,i}_{x-z}\,\eta_{t-,z} & \text{if } x \neq z.\end{cases}$   (1.3)

We also consider the dual process $\zeta_t \in [0,\infty)^{\mathbb{Z}^d}$, $t \ge 0$, which evolves in the same way as $(\eta_t)_{t \ge 0}$ except that (1.3) is replaced by its transpose:

$\zeta_{t,x} = \begin{cases} \sum_{y \in \mathbb{Z}^d} K^{z,i}_{y-x}\,\zeta_{t-,y} & \text{if } x = z,\\ \zeta_{t-,x} & \text{if } x \neq z.\end{cases}$   (1.4)

Here are some typical examples which fall into the above set-up.

The binary contact path process (BCPP): The binary contact path process (BCPP), originally introduced by D. Griffeath [Gri83], is a special case of the model, where

$K = \begin{cases} (\delta_{x,0} + \delta_{x,e})_{x \in \mathbb{Z}^d} & \text{with probability } \frac{\lambda}{2d\lambda+1} \text{ for each of the } 2d \text{ neighbors } e \text{ of } 0,\\ 0 & \text{with probability } \frac{1}{2d\lambda+1}.\end{cases}$   (1.5)

The process is interpreted as the spread of an infection, with $\eta_{t,x}$ infected individuals at time $t$ at the site $x$. The first line of (1.5) says that, with probability $\frac{\lambda}{2d\lambda+1}$ for each $|e| = 1$, all the infected individuals at the site $x - e$ are duplicated and added to those at the site $x$. On the other hand, the second line of (1.5) says that all the infected individuals at a site become healthy with probability $\frac{1}{2d\lambda+1}$. A motivation to study the BCPP comes from the fact that the projected process $(\eta_{t,x} \wedge 1)_{x \in \mathbb{Z}^d}$, $t \ge 0$, is the basic contact process [Gri83].

The potlatch/smoothing processes: The potlatch process discussed in e.g. [HL81] and [Lig85, Chapter IX] is also a special case of the above set-up, in which

$K_x = W k_x, \quad x \in \mathbb{Z}^d.$   (1.6)

Here, $k = (k_x)_{x \in \mathbb{Z}^d} \in [0,\infty)^{\mathbb{Z}^d}$ is a non-random vector and $W$ is a non-negative, bounded, mean-one random variable such that $P(W = 1) < 1$ (so that the notation $k$ here is consistent with the definition (1.7) below). The smoothing process is the dual process of the potlatch process. The potlatch/smoothing processes were first introduced in [Spi81] for the case $W \equiv 1$ and discussed further in [LS81]. It was in [HL81] that the case $W \not\equiv 1$ was introduced and discussed. Note that we do not assume that $k_x$ is a transition probability of an irreducible random walk, unlike in the literature mentioned above.
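The jump rule (1.3) is perhaps easiest to see in a small simulation. The following Python sketch is only illustrative and is not part of the original analysis: it runs the dynamics on a finite periodic box (a stand-in for $\mathbb{Z}^d$; the box size, time horizon and the value $\lambda = 1$ are arbitrary choices), with the BCPP kernel (1.5) as the random vector $K$.

import numpy as np

rng = np.random.default_rng(0)

d, L = 3, 11        # dimension and side length of the periodic box (illustrative choices)
lam = 1.0           # BCPP parameter lambda
T = 3.0             # time horizon
N = L ** d          # number of sites = total rate of the superposed rate-one Poisson clocks

# the 2d neighbors e of the origin, |e| = 1
neighbors = [tuple(s * (i == j) for i in range(d)) for j in range(d) for s in (1, -1)]

def draw_K():
    # draw the BCPP kernel (1.5): K = delta_0 + delta_e with prob. lam/(2d lam + 1)
    # for each neighbor e, and K = 0 with prob. 1/(2d lam + 1)
    u = rng.random() * (2 * d * lam + 1)
    for e in neighbors:
        u -= lam
        if u < 0:
            return {(0,) * d: 1.0, e: 1.0}
    return {}

eta = np.zeros((L,) * d)
eta[(L // 2,) * d] = 1.0          # start from a single particle: eta_0 = delta_x

t = 0.0
while t < T and eta.sum() > 0:
    t += rng.exponential(1.0 / N)                 # time of the next clock ring
    z = tuple(int(c) for c in rng.integers(0, L, size=d))
    eta_z = eta[z]
    eta[z] = 0.0                                  # reset; becomes K_0 * eta_z in the loop below
    for x, K_x in draw_K().items():
        y = tuple((z[i] + x[i]) % L for i in range(d))
        eta[y] += K_x * eta_z                     # the update (1.3)

print("t =", round(t, 3), " total mass |eta_t| =", eta.sum())

Replacing draw_K by a draw of $W k$ (a bounded, mean-one random $W$ times a fixed non-random kernel $k$) turns the same loop into the potlatch process (1.6).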

We now recall the following facts from [Lig85, page 433, Theorems 2.2 and 2.3]. Let $\mathcal{F}_t$ be the $\sigma$-field generated by $\eta_s$, $s \le t$. Let $(\eta^x_t)_{t \ge 0}$ be the process $(\eta_t)_{t \ge 0}$ starting from one particle at the site $x$: $\eta^x_0 = \delta_x$. Similarly, let $(\zeta^x_t)_{t \ge 0}$ be the dual process starting from one particle at the site $x$: $\zeta^x_0 = \delta_x$.

Lemma 1.1.1 We set

$k = (k_x)_{x \in \mathbb{Z}^d} = (P[K_x])_{x \in \mathbb{Z}^d},$   (1.7)

$\bar\eta_t = (e^{-(|k|-1)t}\,\eta_{t,x})_{x \in \mathbb{Z}^d}.$   (1.8)

Then:

a) $(|\bar\eta_t|, \mathcal{F}_t)_{t \ge 0}$ is a martingale, and therefore the following limit exists a.s.:

$|\bar\eta_\infty| = \lim_{t \to \infty} |\bar\eta_t|.$   (1.9)

b) Either

$P[|\bar\eta_\infty|] = 1 \quad \text{or} \quad 0.$   (1.10)

Moreover, $P[|\bar\eta_\infty|] = 1$ if and only if the limit (1.9) is convergent in $L^1(P)$.

c) The statements (a)-(b), with $\bar\eta_t$ replaced by $\bar\zeta_t$, are true for the dual process.
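Although Lemma 1.1.1 is quoted from [Lig85], the normalization (1.8) can also be read off directly from the dynamics; the following short computation is only a sketch of this standard fact. A jump of (1.3) at the site $z$ changes the total mass by

$|\eta| \;\longmapsto\; |\eta| + (|K| - 1)\,\eta_z,$

since the mass $\eta_z$ is replaced by $K_0\eta_z$ and the amounts $K_{x-z}\eta_z$, $x \neq z$, are added at the other sites. As each site carries rate-one Poisson clocks, the generator applied to the function $\eta \mapsto |\eta|$ gives $\sum_{z \in \mathbb{Z}^d} P[\,|K| - 1\,]\,\eta_z = (|k| - 1)\,|\eta|$ (note that $P[|K|] = |k|$ by (1.7) and $K_x \ge 0$). Hence $\frac{d}{dt}P[|\eta_t|] = (|k| - 1)\,P[|\eta_t|]$, so that $|\bar\eta_t| = e^{-(|k|-1)t}|\eta_t|$ has constant expectation; this is exactly the martingale of part (a).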

1.2 Results

We first introduce some more notation. For $\eta, \zeta \in \mathbb{R}^{\mathbb{Z}^d}$, the inner product and the discrete convolution are defined respectively by

$\langle\eta, \zeta\rangle = \sum_{x \in \mathbb{Z}^d} \eta_x \zeta_x \quad \text{and} \quad (\eta * \zeta)_x = \sum_{y \in \mathbb{Z}^d} \eta_{x-y}\,\zeta_y,$   (1.11)

provided the summations converge. We define, for $x, y \in \mathbb{Z}^d$,

$\beta_{x,y} = P[(K - \delta_0)_x (K - \delta_0)_y] \quad \text{and} \quad \beta_x = \sum_{y \in \mathbb{Z}^d} \beta_{x+y,y}.$   (1.12)

If we simply write $\beta$ in the sequel, it stands for the function $x \mapsto \beta_x$. Note then that

$\langle\beta, 1\rangle = \sum_{x,y \in \mathbb{Z}^d} \beta_{x,y} = P[(|K| - 1)^2].$   (1.13)

We also introduce

$G_S(x) = \int_0^\infty P^0_S(S_t = x)\,dt,$   (1.14)

where $((S_t)_{t \ge 0}, P^x_S)$ is the continuous-time random walk on $\mathbb{Z}^d$ starting from $x \in \mathbb{Z}^d$, with the generator

$L_S f(x) = \sum_{y \in \mathbb{Z}^d} L_S(x,y)\,(f(y) - f(x)), \quad \text{with } L_S(x,y) = \frac{k_{x-y} + k_{y-x}}{2} \text{ for } x \neq y,$   (1.15)

cf. (1.7). The set of bounded continuous functions on $\mathbb{R}^d$ is denoted by $C_b(\mathbb{R}^d)$.

Theorem 1.2.1 Suppose $d \ge 3$. Then the following conditions are equivalent:

a) $\langle\beta, G_S\rangle < 2$.

b) There exists a bounded function $h : \mathbb{Z}^d \to [1,\infty)$ such that

$(L_S h)(x) + \tfrac{1}{2}\,\delta_{0,x}\,\langle\beta, h\rangle \le 0, \quad x \in \mathbb{Z}^d.$   (1.16)

c) $\sup_{t \ge 0} P[\,|\bar\eta_t|^2\,] < \infty$.

d) $\lim_{t \to \infty} \sum_{x \in \mathbb{Z}^d} f\bigl((x - mt)/\sqrt{t}\bigr)\,\bar\eta_{t,x} = |\bar\eta_\infty| \int_{\mathbb{R}^d} f\,d\nu$ in $L^2(P)$ for all $f \in C_b(\mathbb{R}^d)$, where $m = \sum_{x \in \mathbb{Z}^d} x\,k_x \in \mathbb{R}^d$ and $\nu$ is the Gaussian measure with

$\int_{\mathbb{R}^d} x_i\,d\nu(x) = 0, \quad \int_{\mathbb{R}^d} x_i x_j\,d\nu(x) = \sum_{x \in \mathbb{Z}^d} x_i x_j\,k_x, \quad i,j = 1,\ldots,d.$   (1.17)

b') There exists a bounded function $h : \mathbb{Z}^d \to [1,\infty)$ such that

$(L_S h)(x) + \tfrac{1}{2}\,h(0)\,\beta_x \le 0, \quad x \in \mathbb{Z}^d.$   (1.18)

c') $\sup_{t \ge 0} P[\,|\bar\zeta_t|^2\,] < \infty$.

d') $\lim_{t \to \infty} \sum_{x \in \mathbb{Z}^d} f\bigl((x - mt)/\sqrt{t}\bigr)\,\bar\zeta_{t,x} = |\bar\zeta_\infty| \int_{\mathbb{R}^d} f\,d\nu$ in $L^2(P)$ for all $f \in C_b(\mathbb{R}^d)$.

The main point of Theorem 1.2.1 is that (a) implies (d) and (d'); the equivalences between the other conditions are byproducts.

Remarks: 1) Theorem 1.2.1 extends [NY09a, Theorem 1.2.1], where the following extra technical condition was imposed:

$\beta_x = 0 \quad \text{for } x \neq 0.$   (1.19)

For example, the BCPP satisfies (1.19), while the potlatch/smoothing processes do not.

2) Let $\pi_d$ be the return probability of the simple random walk on $\mathbb{Z}^d$. We then have that

$\langle\beta, G_S\rangle < 2 \iff \begin{cases} \lambda > \dfrac{1}{2d(1 - 2\pi_d)} & \text{for the BCPP},\\[4pt] P[W^2] < \dfrac{(2|k| - 1)\,G_S(0)}{\langle G_S * k,\,k\rangle} & \text{for the potlatch/smoothing processes},\end{cases}$   (1.20)

cf. [Lig85, page 460, (6.5) and page 464, Theorem 6.16 (a)]. For the BCPP, (1.20) can be seen from the fact that (cf. [NY09a, page 965])

$\beta_{x,y} = \frac{\mathbf{1}\{x = 0\} + \lambda\,\mathbf{1}\{|x| = 1\}}{2d\lambda + 1}\,\delta_{x,y}, \quad \text{and} \quad G_S(0) = \frac{2d\lambda + 1}{2d\lambda}\cdot\frac{1}{1 - \pi_d}.$

To see (1.20) for the potlatch/smoothing processes, we note that $\tfrac{1}{2}(k + \check{k}) * G_S = |k|\,G_S - \delta_0$, with $\check{k}_x = k_{-x}$, and that

$\beta_{x,y} = P[W^2]\,k_x k_y - k_x \delta_{y,0} - k_y \delta_{x,0} + \delta_{x,0}\delta_{y,0}.$

Thus,

$\langle\beta, G_S\rangle = P[W^2]\,\langle G_S * k,\,k\rangle - \langle G_S,\,k + \check{k}\rangle + G_S(0) = P[W^2]\,\langle G_S * k,\,k\rangle + 2 - (2|k| - 1)\,G_S(0),$

from which (1.20) for the potlatch/smoothing processes follows.
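As a numerical illustration of the first line of (1.20) (a back-of-the-envelope check, not part of the original note), one can evaluate the BCPP threshold in $d = 3$; the only outside input is the approximate value of the return probability $\pi_3 \approx 0.3405$ of the simple random walk on $\mathbb{Z}^3$ (Polya's constant).

d = 3
pi_3 = 0.3405            # return probability of the simple random walk on Z^3 (approximate value)

# first line of (1.20): the BCPP satisfies <beta, G_S> < 2 iff lambda > 1 / (2d(1 - 2 pi_d))
lam_c = 1.0 / (2 * d * (1 - 2 * pi_3))
print("critical lambda in d = 3: about", round(lam_c, 4))     # roughly 0.52

# consistency check: for the BCPP, beta = delta_0, so <beta, G_S> = G_S(0)
for lam in (0.45, 0.60):
    G_S0 = (2 * d * lam + 1) / (2 * d * lam) / (1 - pi_3)
    print("lambda =", lam, " <beta, G_S> =", round(G_S0, 4), " < 2:", G_S0 < 2)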

3) It will be seen from the proof that the inequalities in (1.16) and (1.18) can be replaced by equalities, keeping the other statements of Theorem 1.2.1 intact.

As an immediate consequence of Theorem 1.2.1, we have the following

Corollary 1.2.2 Suppose either of (a)-(d) in Theorem 1.2.1. Then $P[|\bar\eta_\infty|] = |\eta_0|$ and, for all $f \in C_b(\mathbb{R}^d)$,

$\lim_{t \to \infty} \sum_{x \in \mathbb{Z}^d} f\bigl((x - mt)/\sqrt{t}\bigr)\,\frac{\eta_{t,x}}{|\eta_t|}\,\mathbf{1}\{|\eta_t| \neq 0\} = \int_{\mathbb{R}^d} f\,d\nu$

in probability with respect to $P(\,\cdot \mid |\eta_t| \neq 0 \text{ for all } t\,)$, where $m = \sum_{x \in \mathbb{Z}^d} x\,k_x \in \mathbb{R}^d$ and $\nu$ is the Gaussian measure defined by (1.17). Similarly, either of (a), (b'), (c'), (d') in Theorem 1.2.1 implies the above statement, with $\eta_t$ replaced by the dual process $\zeta_t$.

Proof: The case of $(\eta_t)$ follows from Theorem 1.2.1 (d), applied both to $f$ and to $f \equiv 1$. Note also that if $P(|\bar\eta_\infty| > 0) > 0$, then, up to a null set, $\{|\bar\eta_\infty| > 0\} = \{|\eta_t| \neq 0 \text{ for all } t\}$, which can be seen by the argument in [Gri83, page 701, proof of Proposition]. The proof for the case of $(\zeta_t)$ is the same. □

2 The proof of Theorem 1.2.1

2.1 The equivalence of (a)-(c)

We first show the Feynman-Kac formula for the two-point function, which is the basis of the proof of Theorem 1.2.1. To state it, we introduce Markov chains $(X, \tilde X)$ and $(Y, \tilde Y)$, which were also exploited in [NY09a]. Let $(X, \tilde X) = ((X_t, \tilde X_t)_{t \ge 0}, P^{x,\tilde x}_{X,\tilde X})$ and $(Y, \tilde Y) = ((Y_t, \tilde Y_t)_{t \ge 0}, P^{x,\tilde x}_{Y,\tilde Y})$ be the continuous-time Markov chains on $\mathbb{Z}^d \times \mathbb{Z}^d$ starting from $(x, \tilde x)$, with the generators

$L_{X,\tilde X} f(x,\tilde x) = \sum_{y,\tilde y \in \mathbb{Z}^d} L_{X,\tilde X}(x,\tilde x,y,\tilde y)\,(f(y,\tilde y) - f(x,\tilde x)) \quad \text{and} \quad L_{Y,\tilde Y} f(x,\tilde x) = \sum_{y,\tilde y \in \mathbb{Z}^d} L_{Y,\tilde Y}(x,\tilde x,y,\tilde y)\,(f(y,\tilde y) - f(x,\tilde x)),$   (2.1)

respectively, where

$L_{X,\tilde X}(x,\tilde x,y,\tilde y) = (k - \delta_0)_{x-y}\,\delta_{\tilde x,\tilde y} + (k - \delta_0)_{\tilde x - \tilde y}\,\delta_{x,y} + \beta_{x-y,\,\tilde x - \tilde y}\,\delta_{y,\tilde y}, \qquad L_{Y,\tilde Y}(x,\tilde x,y,\tilde y) = L_{X,\tilde X}(y,\tilde y,x,\tilde x).$   (2.2)

It is useful to note that

$\sum_{y,\tilde y} L_{X,\tilde X}(x,\tilde x,y,\tilde y) = 2(|k| - 1) + \beta_{x - \tilde x},$   (2.3)

$\sum_{y,\tilde y} L_{Y,\tilde Y}(x,\tilde x,y,\tilde y) = 2(|k| - 1) + \langle\beta, 1\rangle\,\delta_{x,\tilde x}.$   (2.4)

Recall also the notation $(\eta^x_t)_{t \ge 0}$ and $(\zeta^x_t)_{t \ge 0}$ introduced before Lemma 1.1.1.

Lemma 2.1.1 For $t \ge 0$ and $x, \tilde x, y, \tilde y \in \mathbb{Z}^d$,

$P[\zeta^y_{t,x}\,\zeta^{\tilde y}_{t,\tilde x}] = P[\eta^x_{t,y}\,\eta^{\tilde x}_{t,\tilde y}] = e^{2(|k|-1)t}\,P^{y,\tilde y}_{X,\tilde X}\bigl[e_{X,\tilde X,t} : (X_t, \tilde X_t) = (x,\tilde x)\bigr] = e^{2(|k|-1)t}\,P^{x,\tilde x}_{Y,\tilde Y}\bigl[e_{Y,\tilde Y,t} : (Y_t, \tilde Y_t) = (y,\tilde y)\bigr],$

where $e_{X,\tilde X,t} = \exp\bigl(\int_0^t \beta_{X_s - \tilde X_s}\,ds\bigr)$ and $e_{Y,\tilde Y,t} = \exp\bigl(\langle\beta,1\rangle \int_0^t \delta_{Y_s,\tilde Y_s}\,ds\bigr)$.

Proof: By the time-reversal argument as in [Lig85, Theorem 1.25], we see that $(\eta^x_{t,y}, \eta^{\tilde x}_{t,\tilde y})$ and $(\zeta^y_{t,x}, \zeta^{\tilde y}_{t,\tilde x})$ have the same law. This implies the first equality. In [NY09a, Lemma 2.1.1] we showed the second equality, using (2.4). Finally, we see from (2.2)-(2.4) that the operators

$f(x,\tilde x) \mapsto L_{X,\tilde X} f(x,\tilde x) + \beta_{x - \tilde x}\,f(x,\tilde x) \quad \text{and} \quad f(x,\tilde x) \mapsto L_{Y,\tilde Y} f(x,\tilde x) + \langle\beta,1\rangle\,\delta_{x,\tilde x}\,f(x,\tilde x)$

are transposes of each other, and hence so are the semigroups they generate. This proves the last equality of the lemma. □

Lemma 2.1.2 $((X_t - \tilde X_t)_{t \ge 0}, P^{x,0}_{X,\tilde X})$ and $((Y_t - \tilde Y_t)_{t \ge 0}, P^{x,0}_{Y,\tilde Y})$ are Markov chains with the generators

$L_{X - \tilde X} f(x) = 2 L_S f(x) + \beta_x\,(f(0) - f(x)) \quad \text{and} \quad L_{Y - \tilde Y} f(x) = 2 L_S f(x) + \bigl(\langle\beta, f\rangle - \langle\beta,1\rangle f(x)\bigr)\,\delta_{x,0},$   (2.5)

respectively (cf. (1.15)). Moreover, these Markov chains are transient for $d \ge 3$.

Proof: Let $(Z, \tilde Z) = (X, \tilde X)$ or $(Y, \tilde Y)$. Since $(Z, \tilde Z)$ is shift-invariant, in the sense that $L_{Z,\tilde Z}(x+v, \tilde x+v, y+v, \tilde y+v) = L_{Z,\tilde Z}(x,\tilde x,y,\tilde y)$ for all $v \in \mathbb{Z}^d$, $((Z_t - \tilde Z_t)_{t \ge 0}, P^{x,\tilde x}_{Z,\tilde Z})$ is a Markov chain. Moreover, the jump rates $L_{Z - \tilde Z}(x,y)$, $x \neq y$, are computed as follows:

$L_{Z - \tilde Z}(x,y) = \sum_{z \in \mathbb{Z}^d} L_{Z,\tilde Z}(x, 0, z+y, z) = \begin{cases} k_{x-y} + k_{y-x} + \delta_{y,0}\,\beta_x & \text{if } (Z,\tilde Z) = (X,\tilde X),\\ k_{x-y} + k_{y-x} + \delta_{x,0}\,\beta_y & \text{if } (Z,\tilde Z) = (Y,\tilde Y).\end{cases}$

These prove (2.5). By (1.2), the random walk $S$ is transient for $d \ge 3$. Thus $Z - \tilde Z$ is transient for $d \ge 3$, since $L_{Z - \tilde Z}(x,\cdot) = 2 L_S(x,\cdot)$ except for finitely many $x$. □

Proof of (a) ⇒ (b) ⇒ (c) ⇒ (a):

(a) ⇒ (b): Under the assumption (a), the function $h$ given below satisfies the conditions in (b):

$h = 1 + c\,G_S \quad \text{with} \quad c = \frac{\langle\beta, 1\rangle}{2 - \langle\beta, G_S\rangle}.$   (2.6)

In particular, it solves (1.16) with equality.
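For completeness, here is one way to check the last claim (a short computation, using only the standard fact that $L_S G_S = -\delta_0$ for the transient walk $S$): since $(L_S h)(x) = c\,(L_S G_S)(x) = -c\,\delta_{0,x}$ and $\langle\beta, h\rangle = \langle\beta, 1\rangle + c\,\langle\beta, G_S\rangle$, we get

$(L_S h)(x) + \tfrac{1}{2}\,\delta_{0,x}\,\langle\beta, h\rangle = \tfrac{1}{2}\,\delta_{0,x}\bigl(\langle\beta,1\rangle - c\,(2 - \langle\beta, G_S\rangle)\bigr) = 0$

by the choice of $c$ in (2.6); moreover $h$ is bounded and $h \ge 1$, because $G_S \ge 0$ is bounded for $d \ge 3$ and $c \ge 0$ under (a).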

(b) ⇒ (c): By Lemma 2.1.1, we have that

(1) $P[\,|\bar\eta^x_t|\,|\bar\eta^{\tilde x}_t|\,] = P^{x,\tilde x}_{Y,\tilde Y}[e_{Y,\tilde Y,t}], \quad x, \tilde x \in \mathbb{Z}^d,$

where $e_{Y,\tilde Y,t} = \exp\bigl(\langle\beta,1\rangle \int_0^t \delta_{Y_s,\tilde Y_s}\,ds\bigr)$. By Lemma 2.1.2, (1.16) reads

$L_{Y - \tilde Y}\,h(x) + \langle\beta,1\rangle\,\delta_{x,0}\,h(x) \le 0, \quad x \in \mathbb{Z}^d,$   (2.7)

and thus

$P^{x,0}_{Y,\tilde Y}\bigl[e_{Y,\tilde Y,t}\,h(Y_t - \tilde Y_t)\bigr] \le h(x), \quad x \in \mathbb{Z}^d.$

Since $h$ takes its values in $[1, \sup h]$ with $\sup h < \infty$, we have $\sup_x P^{x,0}_{Y,\tilde Y}[e_{Y,\tilde Y,t}] \le \sup h < \infty$. By this and (1), we obtain that $\sup_x \sup_{t \ge 0} P[\,|\bar\eta^x_t|\,|\bar\eta^0_t|\,] \le \sup h < \infty$, and hence (c).

(c) ⇒ (a): Let $G_{Y - \tilde Y}(x,y)$ be the Green function of the Markov chain $Y - \tilde Y$ (cf. Lemma 2.1.2). Then it follows from (2.5) that

$G_{Y - \tilde Y}(x,0) = \tfrac{1}{2}\,G_S(x) + \tfrac{1}{2}\bigl(\langle\beta, G_S\rangle - \langle\beta,1\rangle\,G_S(0)\bigr)\,G_{Y - \tilde Y}(x,0).$   (2.8)

On the other hand, we have by (1) that for any $x, \tilde x \in \mathbb{Z}^d$,

$P^{x,\tilde x}_{Y,\tilde Y}[e_{Y,\tilde Y,t}] = P[\,|\bar\eta^x_t|\,|\bar\eta^{\tilde x}_t|\,] \le P[\,|\bar\eta^0_t|^2\,],$

where the last inequality comes from the Schwarz inequality and the shift-invariance. Thus,

$P^{x,\tilde x}_{Y,\tilde Y}[e_{Y,\tilde Y,\infty}] \le \sup_{t \ge 0} P[\,|\bar\eta^0_t|^2\,] < \infty.$   (2.9)

Therefore, we can define $h : \mathbb{Z}^d \to [1,\infty)$ by

$h(x) = P^{x,0}_{Y,\tilde Y}[e_{Y,\tilde Y,\infty}],$   (2.10)

which solves

$h(x) = 1 + G_{Y - \tilde Y}(x,0)\,\langle\beta,1\rangle\,h(0).$

For $x = 0$, this implies that $G_{Y - \tilde Y}(0,0)\,\langle\beta,1\rangle < 1$. Plugging this into (2.8), we obtain (a). □

Remark: The function $h$ defined by (2.10) solves (2.7) with equality, as can be seen from the way it is defined. This proves (c) ⇒ (b) directly. It is also easy to see from (2.8) that the functions $h$ defined by (2.10) and by (2.6) coincide.

2.2 The equivalence of (c) and (d)

To proceed from (c) to the central limit theorem (d), we will use the following variant of [NY09a, Lemma 2.2.2]:

Lemma 2.2.1 Let $((Z_t)_{t \ge 0}, P^x_Z)$ be a continuous-time random walk on $\mathbb{Z}^d$ starting from $x$, with the generator $L_Z f(x) = \sum_{y \in \mathbb{Z}^d} L_Z(x,y)(f(y) - f(x))$, where we assume that $\sum_{x \in \mathbb{Z}^d} |x|^2 L_Z(0,x) < \infty$. On the other hand, let $\tilde Z = ((\tilde Z_t)_{t \ge 0}, P^x_{\tilde Z})$ be a continuous-time Markov chain on $\mathbb{Z}^d$ starting from $x$, with the generator $L_{\tilde Z} f(x) = \sum_{y \in \mathbb{Z}^d} L_{\tilde Z}(x,y)(f(y) - f(x))$. We assume that $z \in \mathbb{Z}^d$, $D \subset \mathbb{Z}^d$ and a function $v : \mathbb{Z}^d \to \mathbb{R}$ satisfy the following:

$L_Z(x,y) = L_{\tilde Z}(x,y)$ if $x \notin D \cup \{y\}$;

$D$ is transient for both $Z$ and $\tilde Z$;

$v$ is bounded and $v \equiv 0$ outside $D$;

$e_t \stackrel{\rm def}{=} \exp\bigl(\int_0^t v(\tilde Z_u)\,du\bigr)$, $t \ge 0$, are uniformly integrable with respect to $P^z_{\tilde Z}$.

Then, for $f \in C_b(\mathbb{R}^d)$,

$\lim_{t \to \infty} P^z_{\tilde Z}\bigl[e_t\,f\bigl((\tilde Z_t - mt)/\sqrt{t}\bigr)\bigr] = P^z_{\tilde Z}[e_\infty] \int_{\mathbb{R}^d} f\,d\nu,$

where $m = \sum_{x \in \mathbb{Z}^d} x\,L_Z(0,x)$ and $\nu$ is the Gaussian measure with

$\int_{\mathbb{R}^d} x_i\,d\nu(x) = 0, \quad \int_{\mathbb{R}^d} x_i x_j\,d\nu(x) = \sum_{x \in \mathbb{Z}^d} x_i x_j\,L_Z(0,x), \quad i,j = 1,\ldots,d.$

Proof: We refer the reader to the proof of [NY09a, Lemma 2.2.2], which works almost verbatim here. The uniform integrability of $e_t$ is used to make sure that $\lim_{s \to \infty} \sup_{t \ge 0} \varepsilon_{s,t} = 0$, where $\varepsilon_{s,t}$ is an error term introduced in the proof of [NY09a, Lemma 2.2.2]. □

Proof of (c) ⇔ (d):

(c) ⇒ (d): Once (2.9) is obtained, we can conclude (d) exactly in the same way as in the corresponding part of [NY09a, Theorem 1.2.1]. Since (c) implies that $\lim_{t \to \infty} |\bar\eta_t| = |\bar\eta_\infty|$ in $L^2(P)$, it is enough to prove that

$U_t \stackrel{\rm def}{=} \sum_{x \in \mathbb{Z}^d} \bar\eta_{t,x}\,f\bigl((x - mt)/\sqrt{t}\bigr) \longrightarrow 0 \quad \text{in } L^2(P) \text{ as } t \to \infty$

for $f \in C_b(\mathbb{R}^d)$ such that $\int_{\mathbb{R}^d} f\,d\nu = 0$. We set $f_t(x,\tilde x) = f\bigl((x - mt)/\sqrt{t}\bigr)\,f\bigl((\tilde x - mt)/\sqrt{t}\bigr)$. By Lemma 2.1.1,

$P[U_t^2] = \sum_{x,\tilde x \in \mathbb{Z}^d} P[\bar\eta_{t,x}\,\bar\eta_{t,\tilde x}]\,f_t(x,\tilde x) = \sum_{x,\tilde x \in \mathbb{Z}^d} \eta_{0,x}\,\eta_{0,\tilde x}\,P^{x,\tilde x}_{Y,\tilde Y}\bigl[e_{Y,\tilde Y,t}\,f_t(Y_t, \tilde Y_t)\bigr].$

Note that by (2.9) and (c),

(1) $P^{x,\tilde x}_{Y,\tilde Y}[e_{Y,\tilde Y,\infty}] < \infty$.

Since $|\eta_0| < \infty$, it is enough to prove that for each $x, \tilde x \in \mathbb{Z}^d$,

$\lim_{t \to \infty} P^{x,\tilde x}_{Y,\tilde Y}\bigl[e_{Y,\tilde Y,t}\,f_t(Y_t, \tilde Y_t)\bigr] = 0.$

To prove this, we apply Lemma 2.2.1 to the Markov chain $\tilde Z_t \stackrel{\rm def}{=} (Y_t, \tilde Y_t)$ and the random walk $(Z_t)$ on $\mathbb{Z}^d \times \mathbb{Z}^d$ with the generator

$L_Z f(x,\tilde x) = \sum_{y,\tilde y \in \mathbb{Z}^d} L_Z(x,\tilde x,y,\tilde y)\,(f(y,\tilde y) - f(x,\tilde x)),$

where

$L_Z(x,\tilde x,y,\tilde y) = \begin{cases} k_{\tilde y - \tilde x} & \text{if } x = y \text{ and } \tilde x \neq \tilde y,\\ k_{y - x} & \text{if } x \neq y \text{ and } \tilde x = \tilde y,\\ 0 & \text{otherwise}.\end{cases}$

Let $D = \{(x,\tilde x) \in \mathbb{Z}^d \times \mathbb{Z}^d\,;\,x = \tilde x\}$. Then,

(2) $L_Z(x,\tilde x,y,\tilde y) = L_{Y,\tilde Y}(x,\tilde x,y,\tilde y)$ if $(x,\tilde x) \notin D \cup \{(y,\tilde y)\}$.

Moreover, by Lemma 2.1.2,

(3) $D$ is transient both for $(Z_t)$ and for $(\tilde Z_t)$.

Finally, the Gaussian measure $\nu \otimes \nu$ is the limit law in the central limit theorem for the random walk $(Z_t)$. Therefore, by (1)-(3) and Lemma 2.2.1,

$\lim_{t \to \infty} P^{x,\tilde x}_{Y,\tilde Y}\bigl[e_{Y,\tilde Y,t}\,f_t(Y_t, \tilde Y_t)\bigr] = P^{x,\tilde x}_{Y,\tilde Y}[e_{Y,\tilde Y,\infty}]\,\Bigl(\int_{\mathbb{R}^d} f\,d\nu\Bigr)^2 = 0. \qquad \Box$

(d) ⇒ (c): This can be seen by taking $f \equiv 1$.

2.3 The equivalence of (a), (b'), (c')

(a) ⇒ (b'): Let $h = 2 - \langle\beta, G_S\rangle + \beta * G_S$. Then it is easy to see that $h$ solves (1.18) with equality. Moreover, using Lemma 2.3.1 below, we see that $h(x) > 0$ for $x \neq 0$ as follows:

$h(x) = 2 - (\beta * G_S)(0) + (\beta * G_S)(x) \ge \bigl(2 - (\beta * G_S)(0)\bigr)\Bigl(1 - \frac{G_S(x)}{G_S(0)}\Bigr) > 0,$

where we used $(\beta * G_S)(0) = \langle\beta, G_S\rangle < 2$ and $G_S(x) < G_S(0)$ for $x \neq 0$. Since $h(0) = 2$ and $\lim_{|x| \to \infty} h(x) = 2 - (\beta * G_S)(0) \in (0,\infty)$, $h$ is bounded away from both $0$ and $\infty$. Therefore, a constant multiple of the above $h$ satisfies the conditions in (b').
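For completeness, the equality in (1.18) for this $h$ can be checked as follows (again only a sketch, using $L_S G_S = -\delta_0$ and the symmetry of $\beta$ and $G_S$): $(L_S h)(x) = L_S(\beta * G_S)(x) = (\beta * L_S G_S)(x) = -\beta_x$, while $h(0) = 2 - \langle\beta, G_S\rangle + (\beta * G_S)(0) = 2$ because $(\beta * G_S)(0) = \langle\beta, G_S\rangle$; hence $(L_S h)(x) + \tfrac{1}{2}\,h(0)\,\beta_x = 0$ for every $x \in \mathbb{Z}^d$.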

(b') ⇒ (c'): This can be seen in the same way as (b) ⇒ (c) (cf. the remark at the end of Section 2.1).

(c') ⇒ (a): We first note that

(1) $\lim_{|x| \to \infty} (\beta * G_S)(x) = 0,$

since $G_S$ vanishes at infinity and $\beta$ has finite support. We then set

$h_0(x) = P^{x,0}_{X,\tilde X}[e_{X,\tilde X,\infty}], \qquad h_2(x) = h_0(x) - \tfrac{1}{2}\,h_0(0)\,(\beta * G_S)(x).$

Then there exists a positive constant $M$ such that $1/M \le h_0 \le M$ and

$(L_S h_0)(x) = -\tfrac{1}{2}\,h_0(0)\,\beta_x \quad \text{for all } x \in \mathbb{Z}^d.$

By (1), $h_2$ is also bounded, and

$(L_S h_2)(x) = (L_S h_0)(x) - \tfrac{1}{2}\,h_0(0)\,L_S(\beta * G_S)(x) = -\tfrac{1}{2}\,h_0(0)\,\beta_x + \tfrac{1}{2}\,h_0(0)\,\beta_x = 0.$

This implies that there exists a constant $c$ such that $h_2 \equiv c$ on the subgroup $H$ of $\mathbb{Z}^d$ generated by the set $\{x \in \mathbb{Z}^d\,;\,k_x + k_{-x} > 0\}$, i.e.,

(2) $h_0(x) - \tfrac{1}{2}\,h_0(0)\,(\beta * G_S)(x) = c \quad \text{for } x \in H.$

By setting $x = 0$ in (2), we have

$c = h_0(0)\Bigl(1 - \frac{\langle\beta, G_S\rangle}{2}\Bigr).$

On the other hand, we see from (1)-(2) that

$0 < \frac{1}{M} \le \lim_{|x| \to \infty,\ x \in H} h_0(x) = c.$

These imply $\langle\beta, G_S\rangle < 2$. □

Lemma 2.3.1 For $d \ge 3$,

$(\beta * G_S)(x) \ge \frac{G_S(x)}{G_S(0)}\bigl((\beta * G_S)(0) - 2\bigr) + 2\,\delta_{0,x}, \quad x \in \mathbb{Z}^d.$

Proof: The function $\beta_x$ can be either positive or negative. To control this inconvenience, we introduce

$\tilde\beta_x = \sum_{y \in \mathbb{Z}^d} P[K_y K_{x+y}].$

Since $\tilde\beta_x \ge 0$ and $G_S(x+y)\,G_S(0) \ge G_S(x)\,G_S(y)$ for all $x, y \in \mathbb{Z}^d$, we have

(1) $G_S(0)\,(G_S * \tilde\beta)(x) \ge G_S(x)\,(G_S * \tilde\beta)(0).$

On the other hand, it is easy to see that

$\beta = \tilde\beta - k - \check{k} + \delta_0, \quad \text{with } \check{k}_x = k_{-x}.$

Therefore, using $\tfrac{1}{2}(k + \check{k}) * G_S = |k|\,G_S - \delta_0$,

(2) $\beta * G_S = (\tilde\beta - k - \check{k} + \delta_0) * G_S = \tilde\beta * G_S - (2|k| - 1)\,G_S + 2\,\delta_0.$

Now, by (1), and (2) for $x = 0$,

$(G_S * \tilde\beta)(x) \ge \frac{G_S(x)}{G_S(0)}\,(G_S * \tilde\beta)(0) = \frac{G_S(x)}{G_S(0)}\bigl((\beta * G_S)(0) - 2\bigr) + (2|k| - 1)\,G_S(x).$

Plugging this into (2), we get the desired inequality. □
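The inequality $G_S(x+y)\,G_S(0) \ge G_S(x)\,G_S(y)$ invoked at the beginning of the proof is a standard consequence of the strong Markov property; here is a brief sketch. Writing $T_0 = \inf\{t \ge 0 : S_t = 0\}$ (and $T_y$ analogously), one has $G_S(x) = P^x_S(T_0 < \infty)\,G_S(0)$, since the time spent at $0$ only starts accumulating at the first visit to $0$. By forcing the walk to pass through $y$ and using translation invariance,

$P^{x+y}_S(T_0 < \infty) \ge P^{x+y}_S(T_y < \infty)\,P^y_S(T_0 < \infty) = P^x_S(T_0 < \infty)\,P^y_S(T_0 < \infty),$

and multiplying by $G_S(0)^2$ gives the claimed inequality.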

2.4 The equivalence of (c') and (d')

(d') ⇒ (c'): This can be seen by taking $f \equiv 1$.

(c') ⇒ (d'): By Lemma 2.1.1, the Schwarz inequality and the shift-invariance, we have that

$P^{x,\tilde x}_{X,\tilde X}[e_{X,\tilde X,t}] = P[\,|\bar\zeta^x_t|\,|\bar\zeta^{\tilde x}_t|\,] \le P[\,|\bar\zeta^0_t|^2\,], \quad \text{for } x, \tilde x \in \mathbb{Z}^d,$

where $e_{X,\tilde X,t} = \exp\bigl(\int_0^t \beta_{X_s - \tilde X_s}\,ds\bigr)$. Thus, under (c'), the following function is well-defined:

$h_0(x) \stackrel{\rm def}{=} P^{x,0}_{X,\tilde X}[e_{X,\tilde X,\infty}].$

Moreover, there exists $M \in (0,\infty)$ such that $1/M \le h_0 \le M$ and

$(L_S h_0)(x) = -\tfrac{1}{2}\,h_0(0)\,\beta_x \quad \text{for all } x \in \mathbb{Z}^d.$

We set

$h_1(x) = h_0(x) - \frac{1}{2M}.$

Then we have $0 < \frac{1}{2M} \le h_1 \le M$ and

$L_S h_1(x) = L_S h_0(x) = -\tfrac{1}{2}\,h_0(0)\,\beta_x = -\tfrac{1}{2}\,h_1(0)\,p\,\beta_x, \quad \text{with } p = \frac{h_0(0)}{h_1(0)} > 1.$

This implies, as in the proof of (b) ⇒ (c), that

$\sup_{t \ge 0} P^{x,\tilde x}_{X,\tilde X}\bigl[e_{X,\tilde X,t}^{\,p}\bigr] \le 2M^2 < \infty \quad \text{for } x, \tilde x \in \mathbb{Z}^d,$

which guarantees the uniform integrability of $e_{X,\tilde X,t}$, $t \ge 0$, required to apply Lemma 2.2.1. The rest of the proof is the same as in (c) ⇒ (d). □

References

[Gri83] Griffeath, D.: The binary contact path process. Ann. Probab. 11 (1983), no. 3.
[HL81] Holley, R., Liggett, T. M.: Generalized potlatch and smoothing processes. Z. Wahrsch. Verw. Gebiete 55 (1981).
[Lig85] Liggett, T. M.: Interacting Particle Systems. Springer Verlag, Berlin-Heidelberg-Tokyo (1985).
[LS81] Liggett, T. M., Spitzer, F.: Ergodic theorems for coupled random walks and other systems with locally interacting components. Z. Wahrsch. Verw. Gebiete 56 (1981).
[NY09a] Nagahata, Y., Yoshida, N.: Central limit theorem for a class of linear systems. Electron. J. Probab. 14 (2009), no. 34.
[NY09b] Nagahata, Y., Yoshida, N.: Localization for a class of linear systems. Preprint (2009).
[Spi81] Spitzer, F.: Infinite systems with locally interacting components. Ann. Probab. 9 (1981).
