A Note on the Central Limit Theorem for a Class of Linear Systems


A Note on the Central Limit Theorem for a Class of Linear Systems [1]

Yukio Nagahata
Department of Mathematics, Graduate School of Engineering Science,
Osaka University, Toyonaka 560-8531, Japan.
email: nagahata@sigmath.es.osaka-u.ac.jp
URL: http://www.sigmath.osaka-u.ac.jp/~nagahata/

Nobuo Yoshida [2]
Division of Mathematics, Graduate School of Science,
Kyoto University, Kyoto 606-8502, Japan.
email: nobuo@math.kyoto-u.ac.jp
URL: http://www.math.kyoto-u.ac.jp/~nobuo/

Abstract

We consider a class of continuous-time stochastic growth models on the d-dimensional lattice with non-negative real numbers as possible values per site. We remark that the central limit theorem proven in our previous work [NY09a] can be extended to a wider class of models, so that it covers the case of the potlatch/smoothing processes.

[1] September 19, 2009. File name: t/09/lclt2
[2] Supported in part by JSPS Grant-in-Aid for Scientific Research, Kiban (C) 21540125.

Contents

1 Introduction
  1.1 The model
  1.2 Results
2 The proof of Theorem 1.2.1
  2.1 The equivalence of (a)-(c)
  2.2 The equivalence of (c) and (d)
  2.3 The equivalence of (a), (b'), (c')
  2.4 The equivalence of (c') and (d')

1 Introduction

We write N = {1, 2, ...}, N* = {0} ∪ N, and Z = {±x : x ∈ N*}. For x = (x_1, ..., x_d) ∈ R^d, |x| stands for the l^1-norm: |x| = Σ_{i=1}^d |x_i|. For η = (η_x)_{x∈Z^d} ∈ R^{Z^d}, |η| = Σ_{x∈Z^d} |η_x|. Let (Ω, F, P) be a probability space. We write P[X : A] = ∫_A X dP for a random variable X and an event A, and P[X] = P[X : Ω].

1.1 The model

We go directly into the formal definition of the model, referring the reader to [NY09a, NY09b] for the relevant background. The class of growth models considered here is a reasonably ample

subclass of the one considered in [Lig85, Chapter IX] as "linear systems". We introduce a random vector K = (K_x)_{x∈Z^d} such that

  0 ≤ K_x ≤ b_K 1{|x| ≤ r_K} a.s. for some constants b_K, r_K ∈ [0, ∞), (1.1)

  the set {x ∈ Z^d : P[K_x ≠ 0]} contains a linear basis of R^d. (1.2)

The first condition (1.1) amounts to the standard boundedness and finite-range assumptions for the transition rates of interacting particle systems. The second condition (1.2) makes the model truly d-dimensional.

Let τ^{z,i} (z ∈ Z^d, i ∈ N) be i.i.d. mean-one exponential random variables and T^{z,i} = τ^{z,1} + ... + τ^{z,i}. Let also K^{z,i} = (K^{z,i}_x)_{x∈Z^d} (z ∈ Z^d, i ∈ N) be i.i.d. random vectors with the same distribution as K, independent of {τ^{z,i}}_{z∈Z^d, i∈N}. We suppose that the process (η_t) starts from a deterministic configuration η_0 = (η_{0,x})_{x∈Z^d} ∈ N*^{Z^d} with |η_0| < ∞. At time t = T^{z,i}, η_{t−} is replaced by η_t, where

  η_{t,x} = { K^{z,i}_0 η_{t−,z}              if x = z,
            { η_{t−,x} + K^{z,i}_{x−z} η_{t−,z}  if x ≠ z.  (1.3)

We also consider the dual process ζ_t ∈ [0, ∞)^{Z^d}, t ≥ 0, which evolves in the same way as (η_t)_{t≥0} except that (1.3) is replaced by its transpose:

  ζ_{t,x} = { Σ_{y∈Z^d} K^{z,i}_{y−x} ζ_{t−,y}  if x = z,
            { ζ_{t−,x}                        if x ≠ z.  (1.4)

Here are some typical examples which fall into the above set-up:

The binary contact path process (BCPP): The binary contact path process (BCPP), originally introduced by D. Griffeath [Gri83], is a special case of the model, where

  K = { (δ_{x,0} + δ_{x,e})_{x∈Z^d}  with probability λ/(2dλ+1), for each of the 2d neighbors e of 0,
      { 0                           with probability 1/(2dλ+1).  (1.5)

The process is interpreted as the spread of an infection, with η_{t,x} infected individuals at time t at the site x. The first line of (1.5) says that, with probability λ/(2dλ+1) for each |e| = 1, all the infected individuals at site x − e are duplicated and added to those at the site x. On the other hand, the second line of (1.5) says that, with probability 1/(2dλ+1), all the infected individuals at a site become healthy.
A motivation to study the BCPP comes from the fact that the projected process (η_{t,x} ∧ 1)_{x∈Z^d}, t ≥ 0, is the basic contact process [Gri83].

The potlatch/smoothing processes: The potlatch process discussed in e.g. [HL81] and [Lig85, Chapter IX] is also a special case of the above set-up, in which

  K_x = W k_x, x ∈ Z^d.  (1.6)

Here, k = (k_x)_{x∈Z^d} ∈ [0, ∞)^{Z^d} is a non-random vector and W is a non-negative, bounded, mean-one random variable such that P(W = 1) < 1 (so that the notation k here is consistent with the definition (1.7) below). The smoothing process is the dual process of the potlatch process. The potlatch/smoothing processes were first introduced in [Spi81] for the case W ≡ 1 and were discussed further in [LS81]. It was in [HL81] where the case W ≢ 1 was introduced and discussed. Note that we do not assume that k_x is a transition probability of an irreducible random walk, unlike in the literature mentioned above.
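As a concrete illustration (ours, not the paper's; the function name and the choice of a small torus are assumptions of this sketch), the BCPP dynamics (1.5) are easy to simulate in d = 1 with a Gillespie scheme, and one can watch the martingale normalization of Lemma 1.1.1 below at work: for the BCPP, |k| = 4dλ/(2dλ + 1), so |k| − 1 = (2λ − 1)/(2λ + 1) = 1/3 for d = λ = 1, and e^{−t/3}|η_t| should have constant mean 1. The torus wrap-around does not affect the total-mass computation, since the mass update at a ringing site is the same as on Z^d.

```python
import numpy as np

def simulate_bcpp(lam, L, t_max, rng):
    """One Gillespie run of the binary contact path process (1.5) on a
    one-dimensional torus of L sites (so 2d = 2 neighbours), started
    from a single particle at the origin (eta_0 = delta_0).

    Every site carries a rate-1 exponential clock.  When the clock at z
    rings, with probability lam/(2*lam + 1) per neighbour e the
    particles at z are copied onto z + e (K = delta_0 + delta_e), and
    with probability 1/(2*lam + 1) site z is emptied (K = 0)."""
    eta = np.zeros(L)
    eta[0] = 1.0
    p = lam / (2 * lam + 1)          # per-neighbour duplication probability
    t = rng.exponential(1.0 / L)     # total clock rate of the torus is L
    while t < t_max:
        z = rng.integers(L)          # the site whose clock rang
        u = rng.random()
        if u < p:
            eta[(z + 1) % L] += eta[z]
        elif u < 2 * p:
            eta[(z - 1) % L] += eta[z]
        else:
            eta[z] = 0.0             # K = 0: everyone at z recovers
        t += rng.exponential(1.0 / L)
    return eta

rng = np.random.default_rng(0)
t_max, runs = 1.0, 4000
masses = [simulate_bcpp(1.0, 15, t_max, rng).sum() for _ in range(runs)]
# |k| - 1 = 1/3 for d = lam = 1 (cf. (1.7)-(1.8) below):
normalized = float(np.exp(-t_max / 3) * np.mean(masses))
print(round(normalized, 2))          # should be close to 1
```

The normalized mean should land close to 1, up to a Monte Carlo error of order runs^{-1/2}; dropping the factor e^{−t/3} makes the exponential drift of the raw mean visible.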

We now recall the following facts from [Lig85, page 433, Theorems 2.2 and 2.3]. Let F_t be the σ-field generated by η_s, s ≤ t. Let (η^x_t)_{t≥0} be the process (η_t)_{t≥0} starting from one particle at the site x: η^x_0 = δ_x. Similarly, let (ζ^x_t)_{t≥0} be the dual process starting from one particle at the site x: ζ^x_0 = δ_x.

Lemma 1.1.1 We set

  k = (k_x)_{x∈Z^d} = (P[K_x])_{x∈Z^d},  (1.7)

  η̄_t = (e^{−(|k|−1)t} η_{t,x})_{x∈Z^d}.  (1.8)

Then:

a) (|η̄_t|, F_t)_{t≥0} is a martingale, and therefore the following limit exists a.s.:

  |η̄_∞| = lim_{t→∞} |η̄_t|.  (1.9)

b) Either

  P[|η̄_∞|] = |η_0| or 0.  (1.10)

Moreover, P[|η̄_∞|] = |η_0| if and only if the limit (1.9) is convergent in L^1(P).

c) The statements a)-b), with η̄_t replaced by ζ̄_t, are true for the dual process.

1.2 Results

We first introduce some more notation. For η, ζ ∈ R^{Z^d}, the inner product and the discrete convolution are defined respectively by

  ⟨η, ζ⟩ = Σ_{x∈Z^d} η_x ζ_x and (η∗ζ)_x = Σ_{y∈Z^d} η_{x−y} ζ_y,  (1.11)

provided the summations converge. We define, for x, y ∈ Z^d,

  β_{x,y} = P[(K − δ_0)_x (K − δ_0)_y] and β_x = Σ_{y∈Z^d} β_{x+y,y}.  (1.12)

If we simply write β in the sequel, it stands for the function x ↦ β_x. Note then that

  ⟨β, 1⟩ = Σ_{x,y∈Z^d} β_{x,y} = P[(|K| − 1)²].  (1.13)

We also introduce

  G_S(x) = ∫_0^∞ P^0_S(S_t = x) dt,  (1.14)

where ((S_t)_{t≥0}, P^x_S) is the continuous-time random walk on Z^d starting from x ∈ Z^d, with the generator

  L_S f(x) = Σ_{y∈Z^d} L_S(x, y)(f(y) − f(x)), with L_S(x, y) = (k_{x−y} + k_{y−x})/2 for x ≠ y,  (1.15)

cf. (1.7). The set of bounded continuous functions on R^d is denoted by C_b(R^d).

Theorem 1.2.1 Suppose d ≥ 3. Then the following conditions are equivalent:

a) ⟨β, G_S⟩ < 2.

b) There exists a bounded function h : Z^d → [1, ∞) such that

  (L_S h)(x) + (1/2) δ_{0,x} ⟨β, h⟩ ≤ 0, x ∈ Z^d.  (1.16)

c) sup_{t≥0} P[|η̄_t|²] < ∞.

d) lim_{t→∞} Σ_{x∈Z^d} f((x − mt)/√t) η̄_{t,x} = |η̄_∞| ∫_{R^d} f dν in L²(P) for all f ∈ C_b(R^d), where m = Σ_{x∈Z^d} x k_x ∈ R^d and ν is the Gaussian measure with

  ∫_{R^d} x_i dν(x) = 0, ∫_{R^d} x_i x_j dν(x) = Σ_{x∈Z^d} x_i x_j k_x, i, j = 1, ..., d.  (1.17)

b') There exists a bounded function h : Z^d → [1, ∞) such that

  (L_S h)(x) + (1/2) h(0) β_x ≤ 0, x ∈ Z^d.  (1.18)

c') sup_{t≥0} P[|ζ̄_t|²] < ∞.

d') lim_{t→∞} Σ_{x∈Z^d} f((x − mt)/√t) ζ̄_{t,x} = |ζ̄_∞| ∫_{R^d} f dν in L²(P) for all f ∈ C_b(R^d).

The main point of Theorem 1.2.1 is that (a) implies (d) and (d'), while the equivalences between the other conditions are byproducts.

Remarks: 1) Theorem 1.2.1 extends [NY09a, Theorem 1.2.1], where the following extra technical condition was imposed:

  β_x = 0 for x ≠ 0.  (1.19)

For example, the BCPP satisfies (1.19), while the potlatch/smoothing processes do not.

2) Let π_d be the return probability for the simple random walk on Z^d. We then have that

  ⟨β, G_S⟩ < 2 ⟺ { λ > 1/(2d(1 − 2π_d))                          for the BCPP,
                  { P[W²] < (2|k| − 1) G_S(0) / ⟨G_S∗k, k⟩       for the potlatch/smoothing processes,  (1.20)

cf. [Lig85, page 460, (6.5) and page 464, Theorem 6.16(a)]. For the BCPP, (1.20) can be seen as follows (cf. [NY09a, page 965]): we have

  β_{x,y} = ((1{x = 0} + λ 1{|x| = 1})/(2dλ + 1)) δ_{x,y}, and G_S(0) = ((2dλ + 1)/(2dλ)) · 1/(1 − π_d),

so that β = δ_0, hence ⟨β, G_S⟩ = G_S(0), and G_S(0) < 2 is equivalent to λ > 1/(2d(1 − 2π_d)). To see (1.20) for the potlatch/smoothing processes, we note that (1/2)(k + ǩ)∗G_S = |k| G_S − δ_0, with ǩ_x = k_{−x}, and that

  β_{x,y} = P[W²] k_x k_y − k_x δ_{y,0} − k_y δ_{x,0} + δ_{x,0} δ_{y,0}.

Thus,

  ⟨β, G_S⟩ = P[W²] ⟨G_S∗k, k⟩ − ⟨G_S, k + ǩ⟩ + G_S(0) = P[W²] ⟨G_S∗k, k⟩ + 2 − (2|k| − 1) G_S(0),

from which (1.20) for the potlatch/smoothing processes follows.

3) It will be seen from the proof that the inequalities in (1.16) and (1.18) can be replaced by equalities, keeping the other statements of Theorem 1.2.1 intact.

As an immediate consequence of Theorem 1.2.1, we have the following

Corollary 1.2.2 Suppose either of (a)-(d) in Theorem 1.2.1. Then P[|η̄_∞|] = |η_0| and, for all f ∈ C_b(R^d),

  lim_{t→∞} Σ_{x∈Z^d} f((x − mt)/√t) (η_{t,x}/|η_t|) 1{|η_t| ≠ 0} = ∫_{R^d} f dν

in probability with respect to P(· | |η_t| ≠ 0 for all t), where m = Σ_{x∈Z^d} x k_x ∈ R^d and ν is the Gaussian measure defined by (1.17). Similarly, either of (a), (b'), (c'), (d') in Theorem 1.2.1 implies the above statement, with η_t replaced by the dual process ζ_t.

Proof: The case of (η_t) follows from Theorem 1.2.1(d). Note also that if P(|η̄_∞| > 0) > 0, then, up to a null set, {|η̄_∞| > 0} = {|η_t| ≠ 0 for all t}, which can be seen by the argument in [Gri83, page 701, proof of Proposition]. The proof for the case of (ζ_t) is the same.

2 The proof of Theorem 1.2.1

2.1 The equivalence of (a)-(c)

We first show the Feynman-Kac formula for the two-point function, which is the basis of the proof of Theorem 1.2.1. To state it, we introduce Markov chains (X, X̃) and (Y, Ỹ), which were also exploited in [NY09a]. Let (X, X̃) = ((X_t, X̃_t)_{t≥0}, P^{x,x̃}_{X,X̃}) and (Y, Ỹ) = ((Y_t, Ỹ_t)_{t≥0}, P^{x,x̃}_{Y,Ỹ}) be the continuous-time Markov chains on Z^d × Z^d starting from (x, x̃), with the generators

  L_{X,X̃} f(x, x̃) = Σ_{y,ỹ∈Z^d} L_{X,X̃}(x, x̃, y, ỹ)(f(y, ỹ) − f(x, x̃)),
  L_{Y,Ỹ} f(x, x̃) = Σ_{y,ỹ∈Z^d} L_{Y,Ỹ}(x, x̃, y, ỹ)(f(y, ỹ) − f(x, x̃)),  (2.1)

respectively, where

  L_{X,X̃}(x, x̃, y, ỹ) = (k − δ_0)_{x−y} δ_{x̃,ỹ} + (k − δ_0)_{x̃−ỹ} δ_{x,y} + β_{x−y, x̃−ỹ} δ_{y,ỹ},
  L_{Y,Ỹ}(x, x̃, y, ỹ) = L_{X,X̃}(y, ỹ, x, x̃).  (2.2)

It is useful to note that

  Σ_{y,ỹ} L_{X,X̃}(x, x̃, y, ỹ) = 2(|k| − 1) + β_{x−x̃},  (2.3)

  Σ_{y,ỹ} L_{Y,Ỹ}(x, x̃, y, ỹ) = 2(|k| − 1) + ⟨β, 1⟩ δ_{x,x̃}.  (2.4)

Recall also the notation (η^x_t)_{t≥0} and (ζ^x_t)_{t≥0} introduced before Lemma 1.1.1.
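Before turning to the proof, here is a numerical aside (ours, not the paper's): the constants entering Remark 2 after Theorem 1.2.1 can be evaluated by quadrature. The Green function of the rate-one simple random walk on Z³ is the lattice Fourier integral G(0) = (2π)^{−3} ∫_{[−π,π]³} (1 − φ(θ))^{−1} dθ with φ(θ) = (cos θ_1 + cos θ_2 + cos θ_3)/3, from which π_3 = 1 − 1/G(0) and the BCPP threshold in (1.20) follow. The grid size n is an arbitrary accuracy knob; the midpoint grid never hits the integrable singularity at θ = 0.

```python
import numpy as np

# Midpoint quadrature of the lattice Fourier integral for the Green function
# of the rate-1 simple random walk on Z^3.
n = 64
theta = -np.pi + (np.arange(n) + 0.5) * (2 * np.pi / n)
t1, t2, t3 = np.meshgrid(theta, theta, theta, indexing="ij")
phi = (np.cos(t1) + np.cos(t2) + np.cos(t3)) / 3.0
G0 = float(np.mean(1.0 / (1.0 - phi)))     # = (2*pi)^{-3} * integral

pi3 = 1.0 - 1.0 / G0                       # return probability pi_d, d = 3
lam_c = 1.0 / (2 * 3 * (1.0 - 2.0 * pi3))  # BCPP threshold in (1.20), d = 3

# For the BCPP, beta = delta_0, so <beta, G_S> = G_S(0) with
#   G_S(0) = (2*d*lam + 1)/(2*d*lam) * 1/(1 - pi_d).
# Check that the two sides of (1.20) agree at lam = 1:
lam = 1.0
G_S0 = (2 * 3 * lam + 1) / (2 * 3 * lam) / (1.0 - pi3)
print(round(G0, 3), round(pi3, 3), round(lam_c, 3), G_S0 < 2, lam > lam_c)
```

For reference, the exact values are G(0) ≈ 1.5164 (Watson's integral) and π_3 ≈ 0.3405, giving λ_c ≈ 0.52 in d = 3; the midpoint rule at n = 64 should be accurate to roughly a few hundredths.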

Lemma 2.1.1 For t ≥ 0 and x, x̃, y, ỹ ∈ Z^d,

  P[ζ^y_{t,x} ζ^ỹ_{t,x̃}] = P[η^x_{t,y} η^x̃_{t,ỹ}]
   = e^{2(|k|−1)t} P^{y,ỹ}_{X,X̃}[e_{X,X̃,t} : (X_t, X̃_t) = (x, x̃)]
   = e^{2(|k|−1)t} P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,t} : (Y_t, Ỹ_t) = (y, ỹ)],

where e_{X,X̃,t} = exp(∫_0^t β_{X_s−X̃_s} ds) and e_{Y,Ỹ,t} = exp(⟨β, 1⟩ ∫_0^t δ_{Y_s,Ỹ_s} ds).

Proof: By the time-reversal argument as in [Lig85, Theorem 1.25], we see that (η^x_{t,y}, η^x̃_{t,ỹ}) and (ζ^y_{t,x}, ζ^ỹ_{t,x̃}) have the same law. This implies the first equality. In [NY09a, Lemma 2.1.1], we showed the second equality, using (2.4). Finally, we see from (2.2)-(2.4) that the operators

  f(x, x̃) ↦ L_{X,X̃} f(x, x̃) + β_{x−x̃} f(x, x̃), f(x, x̃) ↦ L_{Y,Ỹ} f(x, x̃) + ⟨β, 1⟩ δ_{x,x̃} f(x, x̃)

are transpose to each other, and hence so are the semigroups they generate. This proves the last equality of the lemma.

Lemma 2.1.2 ((X_t − X̃_t)_{t≥0}, P^{x,0}_{X,X̃}) and ((Y_t − Ỹ_t)_{t≥0}, P^{x,0}_{Y,Ỹ}) are Markov chains with the generators

  L_{X−X̃} f(x) = 2 L_S f(x) + β_x (f(0) − f(x))  (2.5)

and

  L_{Y−Ỹ} f(x) = 2 L_S f(x) + (⟨β, f⟩ − ⟨β, 1⟩ f(x)) δ_{x,0},

respectively (cf. (1.15)). Moreover, these Markov chains are transient for d ≥ 3.

Proof: Let (Z, Z̃) = (X, X̃) or (Y, Ỹ). Since (Z, Z̃) is shift-invariant, in the sense that

  L_{Z,Z̃}(x + v, x̃ + v, y + v, ỹ + v) = L_{Z,Z̃}(x, x̃, y, ỹ) for all v ∈ Z^d,

((Z_t − Z̃_t)_{t≥0}, P^{x,x̃}_{Z,Z̃}) is a Markov chain. Moreover, the jump rates L_{Z−Z̃}(x, y), x ≠ y, are computed as follows:

  L_{Z−Z̃}(x, y) = Σ_{z∈Z^d} L_{Z,Z̃}(x, 0, z + y, z)
   = { k_{x−y} + k_{y−x} + δ_{y,0} β_x  if (Z, Z̃) = (X, X̃),
     { k_{x−y} + k_{y−x} + δ_{x,0} β_y  if (Z, Z̃) = (Y, Ỹ).

These prove (2.5). By (1.2), the random walk S is transient for d ≥ 3. Thus Z − Z̃ is transient for d ≥ 3, since L_{Z−Z̃}(x, ·) = 2 L_S(x, ·) except for finitely many x.

Proof of (a) ⇒ (b) ⇒ (c):

(a) ⇒ (b): Under the assumption (a), the function h given below satisfies the conditions in (b):

  h = 1 + c G_S with c = ⟨β, 1⟩/(2 − ⟨β, G_S⟩).  (2.6)

In particular, it solves (1.16) with equality.

(b) ⇒ (c): By Lemma 2.1.1, we have that

1) P[|η̄^x_t| |η̄^x̃_t|] = P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,t}], x, x̃ ∈ Z^d,

where e_{Y,Ỹ,t} = exp(⟨β, 1⟩ ∫_0^t δ_{Y_s,Ỹ_s} ds). By Lemma 2.1.2, (1.16) reads

  L_{Y−Ỹ} h(x) + ⟨β, 1⟩ δ_{x,0} h(x) ≤ 0, x ∈ Z^d,  (2.7)

and thus

  P^{x,0}_{Y,Ỹ}[e_{Y,Ỹ,t} h(Y_t − Ỹ_t)] ≤ h(x), x ∈ Z^d.

Since h takes its values in [1, sup h] with sup h < ∞, we have sup_x P^{x,0}_{Y,Ỹ}[e_{Y,Ỹ,t}] ≤ sup h < ∞. By this and 1), we obtain that

  sup_x P[|η̄^x_t| |η̄^0_t|] ≤ sup h < ∞,

which implies (c), since P[|η̄_t|²] = Σ_{x,x̃} η_{0,x} η_{0,x̃} P[|η̄^x_t| |η̄^x̃_t|] by the additivity of the model, and this is at most |η_0|² sup_x P[|η̄^x_t| |η̄^0_t|] by the shift-invariance.

(c) ⇒ (a): Let G_{Y−Ỹ}(x, y) be the Green function of the Markov chain Y − Ỹ (cf. Lemma 2.1.2). Then it follows from (2.5) that

  G_{Y−Ỹ}(x, y) = (1/2) G_S(y − x) + (1/2)(⟨β, G_S⟩ − ⟨β, 1⟩ G_S(0)) G_{Y−Ỹ}(x, 0).  (2.8)

On the other hand, we have by 1) that, for any x, x̃ ∈ Z^d,

  P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,t}] = P[|η̄^x_t| |η̄^x̃_t|] ≤ P[|η̄^0_t|²],

where the last inequality comes from the Schwarz inequality and the shift-invariance. Thus,

  P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,∞}] ≤ sup_{t≥0} P[|η̄^0_t|²] < ∞.  (2.9)

Therefore, we can define h : Z^d → [1, ∞) by

  h(x) = P^{x,0}_{Y,Ỹ}[e_{Y,Ỹ,∞}],  (2.10)

which solves h(x) = 1 + G_{Y−Ỹ}(x, 0) ⟨β, 1⟩ h(0). For x = 0, this implies that G_{Y−Ỹ}(0, 0) ⟨β, 1⟩ < 1. Plugging this into (2.8), we have (a).

Remark: The function h defined by (2.10) solves (2.7) with equality, as can be seen from the way it is defined. This proves (c) ⇒ (b) directly. It is also easy to see from (2.8) that the functions h defined by (2.10) and by (2.6) coincide.

2.2 The equivalence of (c) and (d)

To proceed from (c) to the central limit theorem (d), we will use the following variant of [NY09a, Lemma 2.2.2]:

Lemma 2.2.1 Let ((Z_t)_{t≥0}, P^x) be a continuous-time random walk on Z^d starting from x, with the generator

  L_Z f(x) = Σ_{y∈Z^d} L_Z(x, y)(f(y) − f(x)),

where we assume that Σ_{x∈Z^d} |x|² L_Z(0, x) < ∞. On the other hand, let Z̃ = ((Z̃_t)_{t≥0}, P̃^x) be a continuous-time Markov chain on Z^d starting from x, with the generator

  L_{Z̃} f(x) = Σ_{y∈Z^d} L_{Z̃}(x, y)(f(y) − f(x)).

We assume that z ∈ Z^d, D ⊂ Z^d and a function v : Z^d → R satisfy:

  L_Z(x, y) = L_{Z̃}(x, y) if x ∉ D ∪ {y};
  D is transient for both Z and Z̃;
  v is bounded and v ≡ 0 outside D;
  e_t := exp(∫_0^t v(Z̃_u) du), t ≥ 0, are uniformly integrable with respect to P̃^z.

Then, for f ∈ C_b(R^d),

  lim_{t→∞} P̃^z[e_t f((Z̃_t − mt)/√t)] = P̃^z[e_∞] ∫_{R^d} f dν,

where m = Σ_{x∈Z^d} x L_Z(0, x) and ν is the Gaussian measure with

  ∫_{R^d} x_i dν(x) = 0, ∫_{R^d} x_i x_j dν(x) = Σ_{x∈Z^d} x_i x_j L_Z(0, x), i, j = 1, ..., d.

Proof: We refer the reader to the proof of [NY09a, Lemma 2.2.2], which works almost verbatim here. The uniform integrability of (e_t) is used to make sure that lim_{s→∞} sup_{t≥0} ε_{s,t} = 0, where ε_{s,t} is an error term introduced in the proof of [NY09a, Lemma 2.2.2].

Proof of (c) ⇔ (d):

(c) ⇒ (d): Once (2.9) is obtained, we can conclude (d) exactly in the same way as in the corresponding part of [NY09a, Theorem 1.2.1]. Since (c) implies that lim_{t→∞} |η̄_t| = |η̄_∞| in L²(P), it is enough to prove that

  U_t := Σ_{x∈Z^d} η̄_{t,x} f((x − mt)/√t) → 0 in L²(P) as t → ∞

for f ∈ C_b(R^d) such that ∫_{R^d} f dν = 0. We set f_t(x, x̃) = f((x − mt)/√t) f((x̃ − mt)/√t). By Lemma 2.1.1,

  P[U_t²] = Σ_{x,x̃∈Z^d} P[η̄_{t,x} η̄_{t,x̃}] f_t(x, x̃) = Σ_{x,x̃∈Z^d} η_{0,x} η_{0,x̃} P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,t} f_t(Y_t, Ỹ_t)].

Note that by (2.9) and (c),

1) P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,∞}] < ∞.

Since |η_0| < ∞, it is enough to prove that, for each x, x̃ ∈ Z^d,

  lim_{t→∞} P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,t} f_t(Y_t, Ỹ_t)] = 0.

To prove this, we apply Lemma 2.2.1 to the Markov chain Z̃_t := (Y_t, Ỹ_t) and the random walk (Z_t) on Z^d × Z^d with the generator

  L_Z f(x, x̃) = Σ_{y,ỹ∈Z^d} L_Z(x, x̃, y, ỹ)(f(y, ỹ) − f(x, x̃)),

where

  L_Z(x, x̃, y, ỹ) = { k_{ỹ−x̃}  if x = y and x̃ ≠ ỹ,
                     { k_{y−x}  if x ≠ y and x̃ = ỹ,
                     { 0        otherwise.

Let D = {(x, x̃) ∈ Z^d × Z^d : x = x̃}. Then,

2) L_Z(x, x̃, y, ỹ) = L_{Y,Ỹ}(x, x̃, y, ỹ) if (x, x̃) ∉ D ∪ {(y, ỹ)}.

Moreover, by Lemma 2.1.2,

3) D is transient both for (Z_t) and for (Z̃_t).

Finally, the Gaussian measure ν ⊗ ν is the limit law in the central limit theorem for the random walk (Z_t). Therefore, by 1)-3) and Lemma 2.2.1,

  lim_{t→∞} P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,t} f_t(Y_t, Ỹ_t)] = P^{x,x̃}_{Y,Ỹ}[e_{Y,Ỹ,∞}] (∫_{R^d} f dν)² = 0.

(d) ⇒ (c): This can be seen by taking f ≡ 1.

2.3 The equivalence of (a), (b'), (c')

(a) ⇒ (b'): Let h = 2 − ⟨β, G_S⟩ + β∗G_S. Then it is easy to see that h solves (1.18) with equality. Moreover, using Lemma 2.3.1 below and the identity ⟨β, G_S⟩ = (β∗G_S)(0) (which follows from the symmetry β_{−x} = β_x), we see that h(x) > 0 for x ≠ 0 as follows:

  h(x) = 2 − (β∗G_S)(0) + (β∗G_S)(x)
   ≥ 2 − (β∗G_S)(0) + (G_S(x)/G_S(0))((β∗G_S)(0) − 2)
   = (2 − (β∗G_S)(0))(1 − G_S(x)/G_S(0)) > 0,

since (β∗G_S)(0) < 2 by (a) and G_S(x) < G_S(0). Since h(0) = 2 and lim_{|x|→∞} h(x) = 2 − (β∗G_S)(0) ∈ (0, ∞), h is bounded away from both 0 and ∞. Therefore, a constant multiple of the above h satisfies the conditions in (b').

(b') ⇒ (c'): This can be seen similarly as (b) ⇒ (c) (cf. the remark at the end of section 2.1).

(c') ⇒ (a): We first note that

1) lim_{|x|→∞} (β∗G_S)(x) = 0,

since G_S vanishes at infinity and β is of finite support. We then set

  h_0(x) = P^{x,0}_{X,X̃}[e_{X,X̃,∞}], h_2(x) = h_0(x) − (1/2) h_0(0)(β∗G_S)(x).

Then there exists a positive constant M such that 1/M ≤ h_0 ≤ M and

  (L_S h_0)(x) = −(1/2) h_0(0) β_x for all x ∈ Z^d.

By 1), h_2 is also bounded, and

  (L_S h_2)(x) = (L_S h_0)(x) − (1/2) h_0(0) L_S(β∗G_S)(x) = −(1/2) h_0(0) β_x + (1/2) h_0(0) β_x = 0.

This implies that there exists a constant c such that h_2 ≡ c on the subgroup H of Z^d generated by the set {x ∈ Z^d : k_x + k_{−x} > 0}, i.e.,

2) h_0(x) − (1/2) h_0(0)(β∗G_S)(x) = c for x ∈ H.

By setting x = 0 in 2), we have

  c = h_0(0)(1 − ⟨β, G_S⟩/2).

On the other hand, we see from 1)-2) that

  0 < 1/M ≤ lim_{x∈H, |x|→∞} h_0(x) = c.

These imply ⟨β, G_S⟩ < 2.

Lemma 2.3.1 For d ≥ 3,

  (β∗G_S)(x) ≥ (G_S(x)/G_S(0))((β∗G_S)(0) − 2) + 2 δ_{0,x}, x ∈ Z^d.

Proof: The function β_x can be either positive or negative. To control this inconvenience, we introduce

  β̂_x = Σ_{y∈Z^d} P[K_y K_{x+y}].

Since β̂_x ≥ 0 and G_S(x + y) G_S(0) ≥ G_S(x) G_S(y) for all x, y ∈ Z^d, we have

1) G_S(0)(G_S∗β̂)(x) ≥ G_S(x)(G_S∗β̂)(0).

On the other hand, it is easy to see that

  β = β̂ − k − ǩ + δ_0, with ǩ_x = k_{−x}.

Therefore, using (1/2)(k + ǩ)∗G_S = |k| G_S − δ_0,

2) β∗G_S = (β̂ − k − ǩ + δ_0)∗G_S = β̂∗G_S − (2|k| − 1) G_S + 2 δ_0.

Now, by 1), and by 2) for x = 0,

  (G_S∗β̂)(x) ≥ (G_S(x)/G_S(0))(G_S∗β̂)(0) = (G_S(x)/G_S(0))((β∗G_S)(0) − 2) + (2|k| − 1) G_S(x).

Plugging this into 2), we get the desired inequality.
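Both ingredients of the proof of Lemma 2.3.1 can be sanity-checked numerically for the rate-one simple random walk on Z³ (our own check, not part of the paper; the quadrature grid size is an arbitrary choice): the resolvent identity L_S G_S = −δ_0 behind step 2), and the inequality G_S(x+y) G_S(0) ≥ G_S(x) G_S(y) behind step 1). We use the Fourier representation G_S(x) = (2π)^{−3} ∫ cos(x·θ)/(1 − φ(θ)) dθ with φ(θ) = (cos θ_1 + cos θ_2 + cos θ_3)/3.

```python
import numpy as np

# Midpoint grid for the lattice Fourier integral on [-pi, pi]^3.
n = 64
theta = -np.pi + (np.arange(n) + 0.5) * (2 * np.pi / n)
t = np.meshgrid(theta, theta, theta, indexing="ij")
phi = (np.cos(t[0]) + np.cos(t[1]) + np.cos(t[2])) / 3.0

def G(x):
    """Green function G_S(x) of the rate-1 simple random walk on Z^3
    (k_x = 1/6 on each of the 6 neighbours), by midpoint quadrature."""
    return float(np.mean(np.cos(x[0]*t[0] + x[1]*t[1] + x[2]*t[2]) / (1.0 - phi)))

def LS_G(x):
    """(L_S G_S)(x) = (1/6) * (sum of G_S over the neighbours of x) - G_S(x)."""
    nbrs = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    return sum(G((x[0]+a, x[1]+b, x[2]+c)) for a, b, c in nbrs) / 6.0 - G(x)

# (i) L_S G_S = -delta_0, behind the identity used in step 2):
print(round(LS_G((0, 0, 0)), 9), round(LS_G((2, 1, 0)), 9))

# (ii) G_S(x+y) G_S(0) >= G_S(x) G_S(y), behind step 1);
#      e.g. x = (1,0,0), y = (1,1,0):
lhs = G((2, 1, 0)) * G((0, 0, 0))
rhs = G((1, 0, 0)) * G((1, 1, 0))
print(lhs >= rhs)
```

Check (i) holds essentially to machine precision here: on the symmetric midpoint grid the quadrature of cos(x·θ) vanishes exactly for 0 < |x_j| < n, so the discretization error cancels out of the difference defining L_S G_S.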

2.4 The equivalence of (c') and (d')

(d') ⇒ (c'): This can be seen by taking f ≡ 1.

(c') ⇒ (d'): By Lemma 2.1.1, the Schwarz inequality and the shift-invariance, we have that

  P^{x,x̃}_{X,X̃}[e_{X,X̃,t}] = P[|ζ̄^x_t| |ζ̄^x̃_t|] ≤ P[|ζ̄^0_t|²] for x, x̃ ∈ Z^d,

where e_{X,X̃,t} = exp(∫_0^t β_{X_s−X̃_s} ds). Thus, under (c'), the following function is well-defined:

  h_0(x) := P^{x,0}_{X,X̃}[e_{X,X̃,∞}].

Moreover, there exists M ∈ (0, ∞) such that 1/M ≤ h_0 ≤ M and

  (L_S h_0)(x) = −(1/2) h_0(0) β_x for all x ∈ Z^d.

We set

  h_1(x) = h_0(x) − 1/(2M).

Then we have 0 < 1/(2M) ≤ h_1 ≤ M and

  L_S h_1(x) = L_S h_0(x) = −(1/2) h_0(0) β_x = −(1/2) h_1(0) p β_x, with p = h_0(0)/h_1(0) > 1.

This implies, as in the proof of (b) ⇒ (c), that

  sup_{t≥0} P^{x,x̃}_{X,X̃}[e^p_{X,X̃,t}] ≤ 2M² < ∞ for x, x̃ ∈ Z^d,

which guarantees the uniform integrability of (e_{X,X̃,t})_{t≥0} required to apply Lemma 2.2.1. The rest of the proof is the same as in (c) ⇒ (d).

References

[Gri83] Griffeath, D.: The binary contact path process. Ann. Probab. 11 (1983), no. 3, 692-705.
[HL81] Holley, R., Liggett, T. M.: Generalized potlatch and smoothing processes. Z. Wahrsch. Verw. Gebiete 55 (1981), 165-195.
[Lig85] Liggett, T. M.: Interacting Particle Systems. Springer Verlag, Berlin-Heidelberg-Tokyo (1985).
[LS81] Liggett, T. M., Spitzer, F.: Ergodic theorems for coupled random walks and other systems with locally interacting components. Z. Wahrsch. Verw. Gebiete 56 (1981), 443-468.
[NY09a] Nagahata, Y., Yoshida, N.: Central limit theorem for a class of linear systems. Electron. J. Probab. 14 (2009), no. 34, 960-977.
[NY09b] Nagahata, Y., Yoshida, N.: Localization for a class of linear systems. Preprint (2009).
[Spi81] Spitzer, F.: Infinite systems with locally interacting components. Ann. Probab. 9 (1981), 349-364.