LOCAL EQUILIBRIUM IN INHOMOGENEOUS STOCHASTIC MODELS OF HEAT TRANSPORT

Size: px
Start display at page:

Download "LOCAL EQUILIBRIUM IN INHOMOGENEOUS STOCHASTIC MODELS OF HEAT TRANSPORT"

Transcription

1 OCA EQUIIBRIUM IN INHOMOGENEOUS STOCHASTIC MODES OF HEAT TRANSPORT Abstract. We extend the duality of Kipnis Marchioro and Presutti [KMP8] to inhomogeneous lattice gas systems where either the components have different degrees of freedom or the rate of interaction depends on the spatial location. Then the dual process is applied to prove local equilibrium in the hydrodynamic limit for some inhomogeneous high dimensional systems and in the nonequilibrium steady state for one dimensional systems with arbitrary inhomogeneity. 1. Introduction Describing the emergence of local equilibrium in systems forced out of equilibrium is one of the main challenges of nonequilibrium statistical mechanics. Most mathematical works concern stochastic interacting particle systems as they are generally more tractable than the deterministic ones. Despite the big amount of recent work in this field, little is known for systems with spatial inhomogeneity. The present paper extends the classical duality result of Kipnis Marchioro and Presutti [KMP8] to inhomogeneous systems, where the inhomogeneity means that either (A) the components of the system have different degrees of freedom or (B) the interaction rate between the components depends on their location. The dual process is roughly speaking a collection of biased random walkers that interact with one another once their distance is not bigger than 1, and are completely independent otherwise. We leverage the dual process to prove the existence of local equilibrium in systems forced out of equilibrium under various assumptions. Some of these results were announced in [NY16] Informal description. To fix notation, we denote by Γ(α, c) the probability distribution with density xα 1 exp( x/c) for x > 0, where α > 0 is the c α Γ(α) shape parameter and c > 0 is the scale parameter (Γ is the usual Gamma function). The kth moment of the Gamma distribution is (1) 0 x k+α 1 exp( x/c) dx = ck Γ(α + k) c α Γ(α) Γ(α) The Beta(α, β) distribution with parameters α, β > 0 has density Γ(α+β) Γ(α)Γ(β) xα 1 (1 x) β 1 for x > 0. For brevity, we will write B(α, β) = Γ(α)Γ(β) Γ(α+β). 1

2 Our model can be informally described as follows. First consider two systems with degrees of freedom ω i and energy ξ i for i = 1,. Now let the two systems exchange energies by a microcanonical procedure: redistribute the total energy according to the law of equipartition. Representing the energy per the jth degree of freedom by X j for j = 1,..., ω 1 + ω, X is thus uniformly distributed on the sphere (ξ 1 + ξ )S ω 1+ω 1. Writing X = (ξ 1 + ξ ) Y / Y, where Y has ω 1 + ω dimensional standard normal distribution, the first system s energy is updated to ξ 1 = X X ω 1 = (ξ 1 + ξ )(Y Y ω 1 ) Y Y ω 1 +ω which is well known to have (ξ 1 + ξ )Beta(ω 1 /, ω /) distribution. In the simplest case, our model consists of a one dimensional chain of systems located at sites 1,,..., 1 and possibly having different degrees of freedom. Then we choose pairs of nearest neighbors randomly and update their energies by the above rule. Furthermore, the systems at site 1 and 1 are coupled to heat baths of different temperature. Then we study the macroscopic energy propagation and the emergence of local equilibrium for large. We also consider two more generalizations: (i) to higher dimensions and (ii) to inhomogeneity in the rate of interaction. 1.. Related works. Here, we briefly mention some previous related works although we do not intend to provide a complete review. For general introduction to interacting particle systems and their macroscopic scaling limits, see [85, S91, K99]. Two classical important references for dual Markov processes are [S70] and [85]. The present paper is an extension of [KMP8]: our one dimensional case with constant degrees of freedom and homogeneous rate of interaction is the KMP model. Some earlier extensions of the KMP, relevant to the present work are the following. The paper [RY07] provides a one dimensional model where a tracer is liable for the transportation of energy. The authors apply certain martingale techniques (suggested by S.R.S. Varadhan) to derive local equilibrium. We will use similar techniques in Section 7, note however that the adaptation to our non homogeneous setup is far from being obvious. The thermalized Brownian energy process, introduced in [CGGR13], corresponds to our model with constant (not necessarily ) degrees of freedom, dimension 1, homogeneous rate of interaction. Next, [CGRS16] defines an asymmetric version of the thermalized Brownian energy process (roughly speaking, the system on the left is preferred in the redistribution of energies). Finally, [NY16] is a homogeneous model with tracers but in arbitrary dimension. All the papers mentioned in this paragraph have some dual process similar to ours. The moral of our proof is to show that in the dual process, the particles behave asymptotically as if they were independent. If this can be verified,

3 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 3 then the local equilibrium is an elementary consequence. An early application of this idea is in [GKMP81], see also [DMP83, DMP91, OR15], etc. A chain of alternating billiard particles ( degrees of freedom) and pistons (1 degree of freedom), has been proposed in [BGNSzT15]. It is expected that here the mathematically rigorous study of the rare interaction limit (at least on a mesoscopic level) is easier than that of the more realistic chain of interacting hard spheres (all having degrees of freedom, [BPS9, GG08]). Note however that in deterministic systems the energies are updated by a much more complicated rule Random walks end electrical networks. Here, we review the connection between one dimensional random walks and electrical networks as we will need them in the study of the dual process. The connection extends to much more general graphs than Z 1, see e.g. [K14], Section 19. Assume that the weights (conductances) w i+1/, i = A, A + 1,..., B, are given and a random walker S k is defined on the set {A, A + 1,..., B} by S(0) = I and P(S k = i+1 S k = i) = w i+1/ w i 1/ + w i+1/, P(S k = i 1 S k = i) = w i 1/ w i 1/ + w i+1/ for all i = A + 1,..., B 1 (the states A and B are absorbing). Then the resistances R i+1/ = 1/w i+1/ and the hitting probability p = P(min{k : S k = 0} < min{k : S k = N}) are connected by the well known formula () p = B 1 i=i R i B 1 i=a R i Finally, to fix terminology, we denote by SSRW the simple symmetric random walk, i.e. a random walk with independent steps, uniformly distributed on the neighbors of the origin in Z d Organization. The rest of the paper is organized as follows. We define our model precisely in Section. In Section 3, the dual process is defined and the duality relation is proved. In Section 4 the main results, namely local equilibrium in the hydrodynamic limit in dimension and in the nonequilibrium steady state in dimension 1, are formulated. Section 5 contains the proofs of the theorems concerning the hydrodynamic limit, namely Theorems 1 and, except for the proof of Proposition 6. Then Proposition 6 is proved in Section 6 (except for the proof of emma 6). Section 7 is the proof of Theorem 3, which is the case of nonequilibrium steady state. Finally, the Appendix contains the proof of emma 6.

4 4. The model et us fix a dimension d 1, and D R d, a bounded, connected open set for which D is a piecewise C submanifold with no cusps. We prescribe a continuous function T : R d \ D R + to be thought of as temperature. For 1, the physical domain of our system is D = D Z d. At each lattice point z D (which will be called a site) there is a physical system of ω z degrees of freedom. The rate of interaction along the edge e = (u, v) E(D ) is denoted by r e = r (u,v) (here, E(D ) is the set of edges of the lattice Z d restricted to D ). Finally, we also fix rates of interaction between boundary sites and the heat bath: r (u,v) for u D, v Z d \ D with u v = 1. Throughout this paper, denotes the cardinality of a finite set, and for v R d, v is its closest point in Z d. The time evolution of the energies X(t) = X () (t) = (ξ () v (t)) v D is a Markov process with generator (Gf)(ξ) = (G 1 f)(ξ) + (G f)(ξ), where G 1 describes interactions within D and G stands for interaction with the bath. We define G 1 as follows. There is an exponential clock at each edge e = (u, v) E(D ) of rate e (u,v). When it rings, the energies of the two corresponding systems (ξ u and ξ v ) are pooled together and redistributed according to a Beta(ω u /, ω v /) distribution. Thus 1 1 (G 1 f)(ξ) = r (u,v) 0 B ( ω u )p ωu, ωv 1 (1 p) ωv 1 [f(ξ ) f(ξ)]dp where (u,v) E(D ) ξ w if w / {u, v} ξ w = p(ξ u + ξ v ) if w = u (1 p)(ξ u + ξ v ) if w = v. Every edge e = (u, v) where u D and v Z d \ D provides connection to the heat bath of temperature T (v/): with rate r (u,v), ξ u is updated to Γ ( ω u, T ( v )). That is, (G f)(ξ) = where u D,v Z d \D, u v =1 ξ w = r (u,v) 0 [ ] η ωu 1 exp T( η ) v [ ( T v )] ωu Γ ( ω u ) [f(ξ ) f(ξ)]dη { ξw if w u η if w = u. This completes the definition of X () (t).

5 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 5 3. The dual process As mentioned in the Introduction, we want to understand the asymptotic behavior of X(t) by switching to a dual process. This section is devoted to the discussion of duality. For D R d and as in Section, we now introduce a Markov process Y () (t) = ((n () v (t)) v D, (ˆn () v (t)) v B ) designed to carry certain dual object, which we call particles, from sites in D to B = {v Z d \ D : v D : u v = 1}. Here, n v the number of particles at site v D and ˆn w the number of particles permanently drooped off to the storage at w B. The generator of the process Y (t) is given by (Af)(n) = (A 1 f)(n) + (A f)(n), where A 1 corresponds to movements inside D and A corresponds to the process of dropping off the particles to the storage. That is, (A 1 f)(n) = n v+n w k=0 ( nu + n v k (u,v) E(D ) r (u,v) where n w if w / {u, v} (3) n w = k if w = u. n v + n w k if w = v. ) B(k + ω u, n u + n v k + ωv ) B( ωu, ωv ) [f(n ) f(n)] and ˆn w = ˆn w w B. Recall that in case of the process X, the energies are redistributed according to a beta distribution with parameters ω u /, ω v /. In case of the dual process Y, we redistribute the particles with the so called beta binomial distribution: first we choose a p according to Beta(ω u /, ω v /), then we choose n u with binomial distribution of parameters n u + n v, p and n v = n u + n v n u. The second part of the generator is given by (A f)(n) = r (u,v) [f(n ) f(n)] u D,v B, u v =1 where n w = { nw if w u 0 if w = u. { and ˆn w = ˆn w ˆn v + n u if w v if w = v This completes the definition of Y () (t).

6 6 Now we turn to the duality. et us define the function with respect to which the duality holds (4) F (n, ξ) = ξnu u Γ(ω u /) [ ( v T Γ(n u + ω u /) )]ˆnv u D The duality with respect to F means that v B Proposition 1. For any ξ any n and any t > 0, E(F (n, X(t)) X 0 = ξ) = E(F (Y (t), ξ) Y 0 = ξ) Proof. Clearly it is enough to prove that for any ξ and n, GF (ξ, n) = AF (ξ, n). To prove this, we consider the following two cases, where Case i corresponds to G i and A i for i = 1,. Case 1. The clock on the edge (u, v) rings. The term corresponding to u and v in G 1 F can be written as r (u,v) I II, where ξw nw Γ(ω w /) [ ( w )]ˆnw I = Γ(ω u /)Γ(ω v /) T Γ(n w + ω w /) and II = Then we compute = = = II + w D \{u,v} 1 0 pnu+ ωu 1 ωv nv+ (1 p) 1 1 B( ωu, ωv ξ nu u ξ nv v Γ(n u + ωu )Γ(n v + ωv 1 1 B( ωu, ωv ) Γ(n u + n v + ωu+ωv Γ(n u + ωu )Γ(n v + ωv ) ) = 1 1 B( ωu, ωv ) Γ(n u + n v + ωu+ωv n 1 u+n v ( ) nu + n v B( ωu, ωv ) B k k=0 ) (ξ u + ξ v ) nu+nv ) n u+n v k=0 w B )(ξ u + ξ v ) nu+nv dp ξu nu ( ) nu + n v k ξu nu ξv nv ( k + ω u, n u + n v k + ω v Thus r (u,v) I II is the term corresponding to u and v in A 1 F. ) ξ nv v ξ nu u Γ(k + ωu ξnv v ) Case. The energy at site u D is updated by the heat bath at v B (where u v = 1). As before, we write the term corresponding to (u, v) in G F as r (u,v) I II, where I = u D \{u} ξ n u u Γ(ω u /) Γ(n u + ω u /) v B \{v} [ T ( v )]ˆnv Γ(n k + ωv )

7 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 7 and II = ωu Γ( ) Γ(n u + ωu ) 0 [ ] η nu+ ωu 1 exp T( η ) v [ ( [ ( T v )] ωu Γ ( ω u ) dη ξu nu v T )]ˆnv By (1), we obtain that II + ξnu u Γ( ωu ) [ ( v [ ( v u Γ(n u + ωu ) T = T )]ˆnv )]ˆnv+n Thus r (u,v) I II is the term corresponding to v in A F. Note that the process Y preserves the total number of particles, which will be denoted be N. We conclude this section with the following simple lemma. emma 1. The restriction of the process Y to arbitrary subset of K particles (with K < N) is also a Markov process and satisfies the definition of Y with N replaced by K. Proof. Without loss of generality, we can consider a system with K = N 1 particles, assume that the clock attached to the edge (v, w) rings and the union of the particles at sites u and v prior to the mixing are labeled {1,..., n}. Then the probability that after the mixing, the set of particles at site u is exactly {j 1,...j l } {1,,..., n} is given by ( ) n Γ(l + ωu /)Γ(n l + ω v /) Γ(ω u / + ω v /) 1 (5) p(j 1,..., j l ) = ( l Γ(n + ω u / + ω v /) Γ(ω u /)Γ(ω v /) n l) Now assume we add a new particle (of index N). If this new particle is not at sites u or v, then clearly the situation is not disturbed. If it is there, we compute (6) p(j 1,..., j l ) + p(j 1,..., j l, N + 1) = Γ(l + ω u /)Γ(n + 1 l + ω v /) Γ(ω u / + ω v /) Γ(n ω u / + ω v /) Γ(ω u /)Γ(ω v /) + Γ(l ω u/)γ(n l + ω v /) Γ(ω u / + ω v /) Γ(n ω u / + ω v /) Γ(ω u /)Γ(ω v /) An elementary computation shows that (5) is equal to (6). The lemma follows. 4. ocal Equilibrium et d, D and be as before such that D is connected. First we state the existence and uniqueness of invariant measure in the equilibrium case.

8 8 Proposition. Fix arbitrary functions ω : D Z + and r : E(D ) R +. If T is constant, then µ () e = ( ωv Γ, 1 ) T v D is the unique invariant probability measure of the process X () (t). Proof. The discussion in Section 1.1 implies the following statement (which is actually well known, see e.g. emma 3 in [CHS81]). et ξ 1 and ξ be independent Gamma distributed random variables with shape parameter k and l, respectively and with the same scale parameter. et Z be independent from X and Y and have Beta distribution with parameters k/ and l/. Then the pair (Z(ξ 1 + ξ ), (1 Z)(ξ 1 + ξ )) has the same distribution as (ξ 1, ξ ). Proposition follows. Our primary interest is in the out-of-equilibrium settings where the bath temperature is non constant: Proposition 3. et ω : D Z + and r : E(D ) R + be arbitrary. The process X () (t) has a unique invariant probability measure µ (). Furthermore, the distribution of X () (t) converges to µ () as t for any initial distribution of X () (0). We skip the proof of Proposition 3 since it is very similar to the analogous propositions in earlier similar models, see Proposition 1. in [RY07], Proposition in [NY16] Hydrodynamic limit. In order to discuss the local equilibrium in the hydrodynamic limit, we need some definitions. First we introduce some properties of the initial measures. Definition 1. We say that X(0) is associated with f if for any fixed δ and any k ( k ) k (7) E ξ v () ω vi i (0) f(v i/) i=1 as uniformly for every v 1,..., v k D satisfying v i v j δ for i j and with some fixed continuous function f : R d R + such that f R d \D = T Definition. We say that X(0) satisfies the uniform moment condition if there are constants C k such that E(ξv k (0)) < C k for every and for every v D. Recall that the évy-prokhorov distance is the metrization of weak convergence of measures. i=1

9 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 9 Definition 3. We say that X () (t) approaches local equilibrium in the hydrodynamic limit at x D and t > 0 if for any finite set S Z d the évy- Prokhorov distance of the distribution of X () (t ) restricted to the components (ξ x +s ) s S and Γ ( ω x +s /, u(t, x) ) s S converges to zero as. We will choose the initial distributions, i.e. the distributions of X () (0), which are associated with a continuous function f. The interesting question is that what kind of equation defines u for different choices of ω and r. In any case, we expect that the initial condition is given by f and the boundary condition by T. We will consider the two simplest cases here. Theorem 1. Assume d and ω v = ω 0 Z + for every site v. Assume furthermore that r (u,v) = R( u+v) for every u D and v Z d with u v = 1, where R C (R d, R + ). Also assume that X () (0) is associated with f, a continuous function f : D R+, and satisfy the uniform moment condition. Then X () (t) approaches local equilibrium in the hydrodynamic limit for all x D and t > 0 with u the unique solution of the equation u t = (R u), u(0, x) = f(x), u(t, x) D = T (x). Remark about the initial conditions Note that f represents the energy per degrees of freedom at time zero, that is why we need the multiplier ω v / on the right hand side of (7). We do not have to assume local equilibrium at zero, which would correspond to the special choice of X(0): the product of Gamma distributions (with shape parameter ω v /, and scale parameter f(v/)). One interesting consequence of Theorem 1 (and similarly that of Theorem ) is that the system satisfies the local equilibrium for arbitrary positive macroscopic time even if it only satisfies the given weaker condition at time zero. Since we want to leverage the duality via moments, we also need to assume some condition on the higher moments. The simplest one is the uniform moment condition. Most probably, neither condition (7) nor the uniform moment condition is optimal, but we do not pursue the most general case here. Our next choice is the simplest non-continuous environment: we consider D = [ 1, 1] d with one of the functions ω and r being constant on D and the other one is constant on [ 1, 0] [ 1, 1] d 1 and [0, 1] [ 1, 1] d 1. Thus we have the following Theorem. et d, D = [ 1, 1] d. Assume that X () (0) is associated with f, a continuous extension of T to D, and satisfies the uniform moment condition

10 10 (a) et ω v = ω 1 if v 1 < 0 and ω v = ω 1 if v 1 0 with some positive integers ω 1, ω 1 and let r be constant. Then X () (t) approaches local equilibrium in the hydrodynamic limit for all x D with x 1 0 and all t > 0 with u the unique solution of the equation u t = r u for x D with x 1 0, ω 1 u(t, (0, x x 1 +,..., x d )) = ω 1 u(t, (0, x x 1,..., x d )) u(0, x) = f(x), u(t, x) D = T (x). (b) et r (u,v) = r 1 if u 1 + v 1 < 0 and r (u,v) = r 1 otherwise, where r 1, r 1 are fixed positive numbers. Similarly, r w = r 1 if w 1 < 0 and r w = r 1 otherwise. et ω be constant. Then X () (t) approaches local equilibrium in the hydrodynamic limit for all x D with x 1 0 and all t > 0 with u the unique solution of the equation u t = r sign(x1 ) u for x D with x 1 0, r 1 u(t, (0, x x 1 +,..., x d )) = r 1 u(t, (0, x x 1,..., x d )) u(0, x) = f(x), u(t, x) D = T (x). 4.. Nonequilibrium steady state. Now we are interested in the invariant measure of the Markov chains for finite (but large). Specifically, we are looking for a function u(x), x D such that in the limit lim lim t the local temperature exists and is given by u(x). In case the hydrodynamic limit is known, u(x) is expected to be equal to lim t u(t, x). We will choose arbitrary environment (r and ω), that is why we define the local equilibrium in a little more general form, namely with an dependent u. Of course in all natural examples, one expects u () to converge. Definition 4. We say that X () (t) approaches local equilibrium in the nonequilibrium steady state if for any x D and any finite set S Z d the évy- Prokhorov distance of the invariant measure of X () (t) restricted to the components (ξ x +s ) s S and Γ ( ω x +s /, u () (x) ) s S converges to zero as. et us fix d = 1 and D = (0, 1). To simplify notation, we will write r m+1/ := r (m,m+1), ω 0 := ω 1 and ω := ω 1. Furthermore, we will need the following two definitions: ψ(m) = ω m 1 + ω m r m 1/ ω m 1 ω m for 1 m

11 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 11 and A () (x) = x m=1 ψ(m) m=1 ψ(m). Theorem 3. et d = 1 and D = (0, 1). Assume that the functions r and ω are bounded away from zero and infinity uniformly in and the temperature on the boundary is given by T (0), T (1) R + {0}. Then X () (t) approaches local equilibrium in the nonequilibrium steady state with u () (x) = (1 A () (x))t (0) + A () (x)t (1). Clearly, u () (x) can easily diverge in this generality. That is why we consider two special cases. In case (a), r is constant and ω is random. In this case, we prove the quenched local equilibrium in the nonequilibrium steady state, i.e. the almost sure convergence of u () (x) to a deterministic limit. In case (b), ω is constant but r is prescribed by a non-constant macroscopic function. We also prove that u () (x) converges in this case. Proposition 4. (a) et r be constant, K be the maximal degrees of freedom and let us fix some continuous function K κ : [0, 1] {p R K : p 0, p i = 1}. For each, let us choose ω v () another with P(ω () v i=1 randomly and independently from one = i) = κ i (v/). Then for almost every realization of the random functions ω (), lim A() (x) = K 1 x i=1 i K 1 i=1 i κ 0 i(y)dy 1 κ 0 i(y)dy. (b) et ω be constant, ϱ : [0, 1] R + is a continuous function. For each and v = 0, 1,... 1 define r () v+1/ = ϱ(v/).then x lim A() 0 (x) = 1/ϱ(y)dy 1 1/ϱ(y)dy. 0 Before turning to the proofs of the above results, we briefly comment on some possibilities of extension Possible extensions. As we will see, the proof of Theorems 1 and also provides the local equilibrium in the nonequilibrium steady state. Corollary 4. Consider the setup of either Theorem 1 or. Then X () (t) approaches local equilibrium (assuming x 1 0 in case of Theorem ) in the nonequilibrium steady state with U(x) = lim t u(t, x).

12 1 Indeed, for δ > 0 fixed one can find some t large such that with probability at least 1 δ, all processes in Proposition 6 have arrived at the boundary before t. In such cases, Ỹ () i (t ) = Ỹ () i ( ), and the latter can be used to prove local equilibrium in the nonequilibrium steady state (cf. the proof of Theorem 3). Furthermore, it seems likely that our proof could be adapted to a version of Theorem with more general domains and with piecewise constant r and ω (see [P90] for the extension of skew Brownian motion to such scenarios). However, the case of more general inhomogeneity in either high dimensions or in the hydrodynamic limit can be difficult. For example, if κ is constant in Proposition 4(a), then in order to verify the hydrodynamic limit, one would have to compute the scaling limit of some interacting random walkers among iid conductances, whereas even the case of one random walker is non obvious (see [SSz04]). Clearly, the case of high dimensions or non iid environments are even much harder. 5. Proof of Theorems 1 and Since the proofs of Theorems 1 and are very similar, we provide one proof and distinguish between cases Theorems 1, (a) and (b) if necessary. et us fix some t > 0, a point x D and a finite set S Z d. We need to show that the (joint) distribution of (ξ x +s (t )) s S converges to the product of gamma distributions with the scale parameter u(t, x). As it is well known, any product of Gamma distributions is characterized by its moments (see [Z83]). Thus it is enough to prove that the moments of (ξ x +s ) s S converge to the product of the moments of gamma distributions (convergence of second moments implies tightness). When computing the moments of order n s N, s S, we can use Proposition 1 to switch to the dual process. We need to introduce some auxiliary processes. et us denote by Ỹ the slight variant of Y where the position of distinguishable particles are recorded. () More precisely, the phase space of Ỹ = (Ỹ i (t)) 0 t,1 i N is (D B ) N, where N = s S n s and the initial condition is given such that for all s S, For any t 0 we define Ỹ (t) so that #{i : Ỹi(0) = x + s} = n s. (8) #{i : Ỹi(t) = v D } = n v (t k ), #{i : Ỹi(t) = v B } = ˆn v (t k ) and for all t, #{i : Ỹi(t ) Ỹi(t+)} 1. We want to show that the diffusively rescaled version of Ỹ converges weakly to N independent copies of some (generalized) diffusion processes Y. Then the Kolmogorov backward equation associated with these processes will provide the function u. Clearly, the process Y will have to be stopped on D. So as to shed light on the main component of the proof, namely the convergence to

13 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 13 the diffusion process, we introduce a further simplification by not stopping the particles. We can assume that R is bounded in case of Theorem 1 (possibly by multiplying with a smooth function which is constant on D and decays quickly) and extend the definition of ω v, r (u,v) in case of Theorem for any u, v Z d. Then we consider the process Z(t) = (n v (t)) v Z d on the space z v Z +, z v = N v Z d with the generator A 1, obtained from A 1 by replacing (u,v) E(D ) with (u,v) E(Z d ). Then we define Z from Z the same way as we defined Ỹ from Y. Observe that by construction (and with the natural coupling) for N = 1 we have (9) Ỹ () (s) = Z(s τ D ) where τ D = min{s : Z(s) / D } Now we define the limiting process (Z(s)) 0 s t with Z(0) = x and with the generator d i=1 R(x) + d R x i=1 i x i x i in Theorem 1 d = i=1 r in Theorem (a) x i d i=1 r sign(x 1 ) in Theorem (b). acting on x i (1) functions φ C0 in case of Theorem 1 () compactly supported continuous functions φ which admit C extensions on (R 0) R d 1 and (R + 0) R d 1 and { ω 1 φ(0, x x 1,..., x d ) = ω 1 φ(0, x x 1 +,..., x d ) in case of Theorem (a) r 1 φ(0, x x 1,..., x d ) = r 1 φ(0, x x 1 +,..., x d ) in case of Theorem (b) Note that Z is a diffusion process in case of Theorem 1 and a generalized diffusion process in case of Theorem (see [P90] for a survey on generalized diffusion processes and [A78] for an early proof of the existence of Z in case of Theorem (b)). Now let us stop Z on D and define (10) Y(s) = Z(s τ D ) where τ D = min{s : Z(s) D} The connection between the processes Ỹ and the PDE s defining u is most easily seen in the simplest case of one particle. et denote weak convergence in the Skorokhod space D[0, T ] with respect to the supremum metric and with some T to be specified. (Although we need the Skorokhod space as the trajectories of Z are not continuous, but the limiting measures will always be supported on C[0, T ] and we can use the supremum metric. Alternatively, one could smooth the trajectories of Z and only use the space C[0, T ].)

14 14 emma. If N = 1, then ( ) Z(s ) 0 s t (Z(s)) 0 s t. Proof. In the setup of Theorem 1, this follows from Theorem in [SV07]. More precisely, as we can neglect events of small probability, we can assume that (A) the particle jumps less than 3 times before t and consequently (B) the smallest time between two consecutive jumps before t is bigger than 4. Now choosing h = 4, Z can only jump at most once on the interval [kh, (k + 1)h] for all k < t 6. Now we choose Π h ( z, z + e i ) ( x + = ei / R ), Π h ( z, z ) = 1 ( ) x + ei / R e i for any z Z d and any unit vector e i. With this choice, we easily see that a ii (x) = R(x) and b i (x) = x i R(x) at the end of page 67 in [SV07]. Applying Theorem 11..3, the emma follows. In case of Theorem (a), we consider the first coordinate of Z and the other d 1 coordinates separately. Under diffusive scaling, the former one converges to a skew Brownian motion by e.g. [ChShY04], while the latter one converges to a d 1 dimensional Brownian motion by Donsker s theorem. A slight technical detail is that we need to switch to discrete time so as to apply the result of [ChShY04]. et us thus define Z 1(k) = Z 1 (τ 1,k ) for non-negative integers k, where τ 1,0 = 0 and τ 1,k = min{t > τ 1,k 1 : Z1 (t ) Z 1 (t+)}. Then by [ChShY04] we have that ( ) Z (11) 1 (s ) ( SBM ω1 /(ω 1 +ω 1 )(s) ), 0 s 4rt 0 s 4rt where SBM ω1 /(ω 1 +ω 1 ) is the skew Brownian motion with parameter ω 1 /(ω 1 + ω 1 ), which is by definition equal to Z 1 (s)/ r. Now observe that by the law of large numbers, the functions s rτ 1,s /(s ) converge in probability to the identity function on D[0, t]. Hence the statement of the emma, restricted to the first component, follows. Finally, the other components are independent from the first one and under diffusive scaling, they converge to Brownian motion by Donsker s theorem. In case of Theorem (b) we use a similar argument to the one in the previous paragraph. Namely, we still have the analogue of (11) and by independence ( ) Z (s ) (1) ( SBM r1 /(r 1 +r 1 )(s/d), W d 1 (s/d) ), 0 s T where 0 s T T = 4d max{r 1, r 1 }t, W d 1 is a d 1 dimensional standard Brownian motion and Z (k) = Z(τ k ) with τ 0 = 0 and τ k = min{t > τ k 1 : Z(t ) Z(t+)} for positive integers k.

15 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 15 The difference from the case of Theorem (a) is that now in order to recover the convergence of Z, we need a nonlinear time change (and consequently have to work with all coordinates at the same time). To do so, let us introduce the local time the first coordinate spends on the negative and positive halfline: τ (s) = s 0 1 f1 (y)<0dy, τ + (s) = for f = (f 1,..., f d ) D([0, T ], R d ). Now we can define and by (Θ(f))(s) = s 1 f1 (y)>0dy, 0 ρ(s) = min{y : τ (y) dr + τ +(y) dr + s} Θ : D([0, T ]) D([0, 3t/]) { f(ρ(s)) if τ (T ) + τ + (T ) 3T/4 0 otherwise. The function Θ is well defined because of the choice of T. et us denote by Q and Q the measure on D[0, T ] given by the left and right hand sides of (1). Next, we observe that Q(τ (T ) + τ + (T ) = T ) = 1. and Θ is continuous on the set {τ (T ) + τ + (T ) = T }. Now we can apply the continuous mapping theorem (see e.g. Theorem 5.1 in [B68]) to conclude Θ Q Θ Q. Here, Θ Q is the distribution of a process (Ẑ(s)) 0 s 3t/ obtained from (Z (s)) 0 s T by rescaling time with Θ. Since Θ Q is the distribution of Z(s), it suffices to show that ( ) Z(s ) (13) Ẑ(s ) 0. 0 s t et us introduce the notations z k = Z (k) Z (k 1), T i = τ i τ i 1. By an elementary estimate on the local time of random walks, we have P(#{k < t : Z (k) 1 = 0} > 3/ ) = o(1). Since we prove weak convergence, we can neglect the above event and subsequently assume that z 1,..., z t is such that the first coordinate s local time at zero is smaller than 3/4. Especially, we use the first line of the definition of Θ when we construct Ẑ for large. Thus with the notation ĩ s = max{i : i T j < s} î s = max{i : j=1 i E(T j 1 {Z (j) 1 0} z 1,..., z t ) < s}. j=1

16 16 we have Z(s) = Z (ĩ s ) and Ẑ(s) = Z (î s ). Now with some fixed small δ let us write k k = and = i [kδ,(k+1)δ ] i [kδ,(k+1)δ ],Z(i) 1 0 for k = 1,,..., T/δ. Then by Chebyshev s inequality, ( ) k k P Ti E(Ti z 1,..., z t ) > δ3 z 1,..., z zt < δ 1 4d min{r1, r 1} δ 6 < 4 δ10 for large enough. Furthermore, by our assumption on z 1,..., z t, ( ) k k P Ti Ti > δ 3 z 1,..., z t < δ 10 also holds for large. Consequently the random times s l = l k=1 k E(T i z 1,..., z t ) and ŝ l = l /δ i=1 T i satisfy 0 = s 0 < s 1 < s... < s T, s T 3t/ and s l s l 1 < Cδ for all l with s l < t, P( ŝ l s l > δ ) < δ 8 and Ẑ(ŝ l ) = Z( s l ) This together with tightness of Z (proved in the usual way) gives (13). We have finished the proof of emma. emma 3. If N = 1, then ) (Ỹ () (s ) 0 s t (Y(s)) 0 s t. Proof. This is a consequence of the continuous mapping theorem, emma, (9) and (10). Next, we prove a simpler version of Theorems 1 and, namely the convergence of the expectations Proposition 5. Consider the setup of either Theorem 1 or Theorem. Then ( ) (14) lim E ξ () x (t ) = u(t, x) ω x, where ω x = ω in case of Theorems 1 and (b) and = ω sign(x1 ) in case of Theorem (a) Proof. By Proposition 1, the left hand side of (14) is equal to ( ) ω x E ξ (Ỹ (t 1 Ỹ (t ) { Ỹ (t ) D } + T ) ωỹ (t ) 1 { Ỹ (t ) B } ) =: I + II,

17 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 17 where ξv = ξ v (0). In case of Theorems 1 and (b), ω is constant, thus we obtain I = P(Ỹ (t ) = v)e(ξv). v D Recall that (7) gives E(ξv) = ω f(v/) + o(1). Applying emma 3 and the definition of the weak convergence, we obtain Similarly, lim lim I = ω E(f(Y(t))1 {Y(t) D}) II = ω E(T (Y(t))1 {Y(t) D}) Now the proposition follows from the fact that u(t, x) solves the Kolmogorov equation associated with the process Y. Finally, in case of Theorem (a), we consider the function h on D with h(y) = 0 if y 1 < 0, h(y) = y 1 /δ if 0 y 1 < δ and h(y) = 1 if y 1 > δ and approximate I by Ia+Ib, where Ia is obtained from I by multiplying the integrand with h(ỹ (t )/) and Ib is obtained from I by multiplying the integrand with h( Ỹ (t )/). Clearly, I ε < Ia+Ib < I for δ small enough. Then we can repeat the above argument since ω is constant on the integration domain in both Ia and Ib. In order to complete the proof of Theorems 1 and, we need an extension of emma 3 from N = 1 to arbitrary N: Proposition 6. The processes ) () (Ỹ i (s ) 0 s t for i = 1,..., N converge weakly to N independent copies of (Y(s)) 0 s t. Now, we prove Theorems 1 and assuming Proposition 6. By the discussion at the beginning of this section and by (1), Theorems 1 and will be proved once we establish (15) E ( s S ξ n s x +s (t ) ) u(t, x) N s S By Proposition 6, lim P(A () δ ) = o δ (1), where A δ = A () δ = { Ỹ () i Γ(n s + ω x +s /). Γ(ω x +s /) (t ) Ỹ () (t ) δ for some i j}. j This, combined with the uniform moment condition and the Cauchy-Schwarz inequality gives lim E ( 1 Aδ F (Y (t ), ξ ) ) = o δ (1),

18 18 where ξ v = ξ v (0) and F is the duality function defined in (4). Now Proposition 1 and the fact that ξ is associated with f implies that the left hand side of (15) is o δ (1) close to Γ(n s + ω x +s /) Γ(ω s S x +s /) [ N ( E 1 Aδ i=1 ξ Ỹ (t ) ωỹ (t ) ) )] (Ỹ (t 1 { Ỹ (t ) D } + T ) 1 { Ỹ (t ) B }. Now we can cut this integral to N pieces and apply a version of the proof of Proposition 5 to conclude (15). In order to complete the proof of Theorems 1 and it only remains to prove Proposition 6, which is the subject of the next section. 6. Proof of Proposition 6 We are going to prove a variant of Proposition 6 obtained by replacing Ỹi and Y with Z and Z. Proposition 6 follows from this variant the same way as emma 3 follows from emma. The idea of the proof is borrowed from [NY16], Section 5: we show that with probability close to 1, Zi (s) and Z j (s) will not meet after getting separated by a distance γ with some γ close to 1. Since Z i (s) and Z j (s) move independently if their distance is bigger than 1, we can replace Z(s) for s > τ = max{τ i,j } by N independent copies of Z 1 (s), where τ i,j is the first time s when Z i (s) Z j (s) > γ. It only remains to show that Z i (τ) is close to x and τ/ is negligible. We complete this strategy in the case of Theorem 1, N = and d = in Section 6.1 and for all other cases in Section Case of Theorem 1, N = and d =. Recalling the notation of Section 5, let us write Z(k) = Z (k) Z 1(k). Note that Z is not a particularly nice process: it is neither Markov, nor translation invariant. As long as both Z 1(k) and Z (s) are ε-close to x, Z is well approximated by a Brownian motion. In particular, if Z = M, then the probability that Z reaches M/ before reaching M is close to 1/ and thus Z performs an approximate simple symmetric random walk (SSRW) on the circles C m = {z Z : z m < }. We will estimate the goodness of this approximation by a SSRW. Assuming that Z(s 0 ) C m, denote by s 1 the smallest s > s 0 such that Z(s) C m 1 or Z(s) C m+1. We also write log = log, M = m, E,M = P(Z(s 1) C m 1 ) 1.

19 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 19 Note that s 1 is a stopping time with respect to the filtration generated by Z (s), Z 1(s). We will need a series of lemmas. emma 4. If Z(k), then P (Z(k + 1) Z(k) = e) = O ( Z(k) for e = (1, 0), ( 1, 0), (0, 1), (0, 1). Here, the constants involved in O only depend on the C norm of R. Proof. Using that R is a C function on a compact domain containing D, the lemma follows from the definition. emma 4 enables us to couple Z(k) with a planar SSRW W (k) such that P(Z(k) Z(k 1) = W (k) W (k 1) Z(k)) 1 C Z(k). We are going to apply such a coupling several times in the forthcoming lemmas. emma 5. There are constants ε 0 > 0, C 0 < and θ < 1 such that assuming Z(s 0 ) = M < ε 0. P(s 1 > n) < C 0 θ n/m Proof. To verify emma 5, we couple Z(k), k [s 0, s 0 + M ] to a SSRW W (k), k [s 0, s 0 + M ], W (s 0 ) = Z(s 0 ). By emma 4, we can guarantee that Z(k) W (k) < CM 3 / for all k < M with some probability bounded away from zero. Here, C only depends on the C norm of R. Thus by choosing ε 0 small, we can guarantee CM 3 / < M/10. Since W is close to a Brownian motion, it reaches C m or C m+ before M with some positive probability. Consequently, there is some p independent of such that P(s 1 < M ) > p. Applying this argument in an inductive fashion gives emma 5. et us introduce the notation (16) τ γ = τ γ () = min{k : Z(k) > γ } Now we claim the following emma 6. For every γ < 1 with 1 γ small, there exists some ξ > 0 such that for large enough, the following estimates hold. (a) Tiny gap: If m < 3 log, then 5 (b) Small gap: If P(τ 3/5 > 13/10 ) = O( 1/10 ) 3 10 log m < γ log, then E,M = O( ξ ) )

20 0 (c) Moderate gap: If γ log < m < log log log, then E,M = O(log 3/ ) (d) arge gap: There is some ε > 0 such that if log log log < m < log + log ε, then E,M < 1/100 As the proof of emma 6 is slightly longer than the other lemmas, we postpone it to the Appendix. Next, we formulate our key lemma: emma 7. For any small δ > 0, there exists γ < 1 such that for large enough, P( k : τ γ < k < t : Z(k) ) > 1 δ. Proof. In order to derive emma 7 from emma 6, recall the connection of random walks and electrical networks from Section 1.1. Using the notation from there, we choose A = 3 10 log, B = log +log ε and I = γ log, R I+1/ = 1. If R i+1/ is defined for some i [I, log log log ], then let us define R i+3/ = R i+1/ (1 + K log 3/ ). If R i+1/ is defined for some i [log log log, B 1], then let us define R i+3/ = w i+1/. Similarly, if R i+1/ is defined for some i [A, I], then we define R i 1/ = R i+1/ (1 K ξ ). Now by emma 6(b-d) P(min{k : Z(k + s 0 ) < 3/10 } < min{k : Z(k + s 0 ) > ε} Z(s 0 ) = γ ) is bounded from above by (). An elementary computation shows that () can be made arbitrarily small by choosing γ close to 1. Thus after τ γ, Z(k) reaches ε before reaching with probability close to 1. Finally, emmas 1 and yield that for fixed ε the two particles do not meet after separating by a distance ε and before t with probability close to 1. This proves emma 7. As a consequence of emma 7, we will be able to replace Z 1 (s) and Z (s) with two independent copies after time τ γ = min{s : Z 1 (s) Z (s) > γ }. Then Proposition 6 will easily follow once we establish that (A) τ γ / is negligible and (B) ( Z i (τ γ ) x)/ is negligible. This is what we do in the next two lemmas. emma 8. For any fixed γ < 1 and δ > 0, we have for large enough. P(τ γ < 1+γ ) > 1 δ Proof. By emma 6(a), it is enough to prove P(τ α τ 3/5 > 1+γ ) < δ. We write s 0 = τ 3/5 and if Z(s i ) C m with m > 3 log, then s 10 i+1 is the first time s when either Z(s) C m 1 or Z(s) C m+1. If Z(s i ) C 3 10 log, then s i+1 is the first time s when Z(s) C 3 10 log +1. By emma 6(b), b i := log Z(s i ) can

21 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 1 be approximated by a one dimensional SSRW (reflected at 3 log, absorbed 10 at γ log ) with an error of O( ξ ) at each step. In particular, if t is the smallest i when b i γ log, then P(t > log 3 ) < δ/10 and thus b i, i t can be coupled to a SSRW with an error < δ/5. Now if ζ = (1 γ)/ and l m = #{i < t : b i = m}, then P( m : l m > ζ ) log max m P(l m > ζ ) < δ/10 by the gambler s ruin estimate P(l m > n+1 l m > n) < 1 log 1. If l m < ζ for all m, then τ α < γ log m= 3 10 log ζ i=1 T m,i, where T m,1 is the random time s 1 s 0 if Z(s 0 ) = m and for fixed m, T m,i s are iid. Consequently, we have P(τ α > 1+γ ) < 3δ 10 + (log )ζ max P ( T m,1 > 1+γ ζ log 1 ) < δ m by emma 5 and emma 6(a). Now, with the notation we have τ γ = τ γ () = min{k : Z 1 (k) x > γ or Z (k) x > γ } emma 9. For any fixed γ < 1 and δ > 0, we have for large enough. P( 1+γ < τ γ+3 ) > 1 δ 4 Proof. By emma 1, it is enough to consider the case of one particle. simplified version of emma yields that max Z 1 (s) x < K 1+γ s< 1+γ with probability 1 δ for some K and large. The variant of Proposition 6 (explained in the beginning of Section 6) easily follows from emmas 7, 8 and 9. Since we can neglect an event of small probability, we can assume that all events hold in emmas 7, 8 and 9. Note that by definition, Z1 (s) and Z (s) move independently if their distance is bigger than. Thus for s > τ γ, we can replace ( Z 1 ( τ γ + k)) k=1,...,t and ( Z ( τ γ +k)) k=1,...,t with independent random walks, both of them converging to Z(s) under the proper scaling. Finally, by emmas 8 and 9, τ γ / < δ and ( Z i ( τ γ ) x)/ < δ. Proposition 6 follows in the case of Theorem 1, N =, d =. A

22 6.. Completing the proof of Proposition 6. The case of general N follows from emma 1 and from the case N =. Indeed, emmas 1, 7 and 8 imply that for some γ < 1, P( s [ 1+γ, t ], i, j {1,,..., N} : Z i (s) Z j (s) ) < δ. Furthermore, we have the analogue of emma 9 with τ α replaced by min{k : i < N : Z i (k) x > α }. Whence the case of general N follows the same way as before. The case of dimension d > is simpler than d = as the particles only meet finitely many times. emma 10. In case of Theorem 1, d >, N =, there is some positive p such that Z 1 (s) Z (s) for all k [, t ] with probability at least p. Proof. Consider the process Z(k) = Z (k) Z 1(k) as in Section 6.1. et us fix some small ε > 0. We will show that the probability of the event { Z reaches ε before reaching 1} is bounded away from zero. From this the lemma will follow by the same argument as in d = (cf. the end of the proof of emma 7). Similarly to emma 4, we have (17) 1 d c 0ε 1 < P (Z(k + 1) Z(k) = e) < d + c 0ε assuming Z(k) ε for all unit vectors e Z d and large enough. Now we define a random walk B(k) on Z d with weights. Specifically B(3) = Z(3) and we choose the weights ( w (u,v) = 1 c ) 1ε l, where u 1 = l 1, v 1 = l and c 1 c 0 is a fixed constant. Clearly Z(3) holds with some positive probability. et us write t B,l = min{k > 3 : B(k) 1 = l}, where min = and similarly t Z,l = min{k > 3 : Z(k) = l}. Now we claim that emma 11. Assuming c 1 = c 1 (c 0 ) is large enough, there exists a coupling between the processes B and Z such that B i (k) Z i (k) holds for all k [3, t B,1 t Z,ε ) and i d almost surely. By emma 11, it suffices to prove that (18) P(t B,ε < t B,1 ) is bounded away from zero. This follows from a simple application of the connection between random walks and electrical networks. Note that the weights w (u,v) are bounded away from zero in the ε neighborhood of the origin. In such cases, (18) follows from a standard argument, see e.g. the proof of Theorem in [K14]. In order to complete the proof of the emma 10, it only remains to prove emma 11.

23 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 3 Proof of emma 11. We prove by induction on k. Assume B i (k) Z i (k) for all i d. Case 1: none of the coordinates of B(k) and Z(k) are zero. If B i (k +1) = B i (k) +1, then we define Z(k +1) by Z i (k +1) = Z i (k) +1. We can do so by (17) and by the definition of B: if c 1 is large enough, then (19) P( B i (k + 1) = B i (k) + 1) P( Z i (k + 1) = Z i (k) + 1). Similarly, if Z i (k + 1) = Z i (k) 1, then we define B(k + 1) by B i (k + 1) = B i (k) 1. We can do so as similarly to (19), we have P( Z i (k + 1) = Z i (k) 1) P( B i (k + 1) = B i (k) + 1). The coupling is arbitrary on the remaining set (sometimes we may have to move B and Z in different directions to match the probabilities, i.e. to define a proper coupling). This proves the inductive step for the case when none of the coordinates of B(k) and Z(k) are zero. Case : none of the coordinates of B(k) are zero or for all i with B i (k) = 0, Z i (k) = 0 also holds. The same argument works as in Case 1. Note that the coupling of Case 1 will not work if B i (k) = 0 and Z i (k) 0 as P( B i (k + 1) = B i (k) + 1) d 1 while P( Z i (k + 1) = Z i (k) + 1) (d) 1. et I denote the set of indices i with B i (k) = 0, Z i (k) 0 and let I I be the set of indices i with B i (k) = 0 and Z i (k) = 1. Case 3: Z i (k) for all i I. If B i (k + 1) = B i (k) + 1 with some i I, then we can define Z(k + 1) by changing the ith coordinate (either increasing or decreasing), since we have (0) P(B(k + 1) = B(k) + e i or B(k) e i ) P(Z(k + 1) = Z(k) + e i or Z(k) e i ). The proof of (0) is similar to that of (19): the first line of (0) is w (B(k),B(k)+ei ) + w (B(k),B(k) ei ) v: v B(k) =1 w. (B(k),v) Here the denominator can be bounded from below by ( 1 c ) 1ε B(k) 1 [ ( 1 + (d 1) 1 c )] 1ε, which corresponds to the case when only one coordinate is nonzero (note that we have excluded B(k) = 0 since k < t B,1 ). Combining this estimate with (17) gives (0) assuming that c 1 = c 1 (c 0 ) is large enough. The other coordinates are treated the same way as in Case 1. Case 4: I 0 is an even integer. Consider a perfect matching of I. et (i, j) I be an arbitrary pair. If Z i (k +1) = 0, then we define B(k +1) such that B j (k +1) = 1. We can do so, since P(Z i (k + 1) = 0) (d) 1 and P( B j (k + 1) = 1) d 1. Analogously, if Z j (k + 1) = 0, then we define B(k + 1) such that B i (k + 1) = 1. Then,

24 4 we do the coupling of movements in other directions as discussed in cases 1-3. Finally, we couple the remaining set (including Z i (k + 1) =, Z j (k + 1) = ) arbitrarily. Case 5: I = n is an odd integer. First we consider a matching of n 1 elements of I, and do the coupling described in Case 4. et us denote the remaining index by i. Now we claim that there is some j / I such that Z j (k) B j (k) 1. Indeed, if there was no such j, then d m=1 Z m(k) B m (k) = n would be an odd number, which is a contradiction with the fact that Z(0) = B(0) and both processes move to nearest neighbors at each step. Now if Z j (k +1) = Z j (k) 1, then we define B(k + 1) such that B i (k + 1) = 1. If Z i (k + 1) = 0, we define B(k + 1) by changing the jth coordinate (either decreasing or increasing). These can be done as before. Then, we consider the cases corresponding to other coordinates as discussed in cases 1-3. Finally, we couple the remaining set arbitrarily. Using emma 10, one can easily prove a much simplified version of emmas 7, 8 and 9 with γ, 1+γ, 3+γ 4 replaced by constants K 1 (δ), K (δ), K 3 (δ) This implies Proposition 6 for the case d >, N =. Then the case of general N follows the same way as in d =. Finally, in case of Theorem we have assumed that x 1 0. Then choosing ε < x 1, the proof of the case of Theorem 1 (with the choice R is the constant 1 function) applies. 7. Proof of Theorem 3 and Proposition 4 As in the proof of Theorems 1 and, we prove the weak convergence by showing that the moments converge. The latter one is showed by switching to the dual process. Recalling some notation from Section 5, we define Y from Ỹ (t) the same way as we defined Z from Z(t). The proof consists of two parts. First we consider the case when the number of particles is N = and prove Proposition 7. [ ( ) lim E Y () 1 ( ) T ( )] Y () ( ) T / [ u () (x) ] = 1 Then we can derive an extension of Proposition 7 to arbitrary N. The idea of the proof of Proposition 7 is borrowed from [RY07] (the main difference is in emma 14). The extension to arbitrary N is very similar to the argument in [KMP8, NY16]. et us write P () (A 1 ) = P(Y () 1 ( ) = A 1 ) and P () (A 1, A ) = P(Y () 1 ( ) = A 1, Y () ( ) = A )

25 OCA EQUIIBRIUM IN INHOMOGENEOUS MODES 5 for A 1, A {0, }. The asymptotic hitting probabilities of one particle are given by emma 1. lim P () () A () (x) = 1. Proof. Although this lemma follows from the connection between random walks and electrical networks, we give a direct proof as its extensions will be needed later. Note that by the definition of ω 0, ω and ψ(m), we have for any 1 m 1 (1) ω m 1 ω m 1 + ω m r m 1/ ψ(m) = ω m+1 ω m + ω m+1 r m+1/ ψ(m + 1). et us write Φ(m) = m i=1 ψ(i) for 0 m. Then (1) means that Φ(Y 1 (k)) is a bounded martingale. After the first hitting of either 0 or, this martingale clearly stays constant. Then by the martingale convergence theorem, P () ()Φ() = Φ(Y 1 (0)). Since A () (x) Φ(Y 1 (0))/Φ(), emma 1 follows. Now with the notation Φ(m) = i=m+1 ψ(i) for 0 m our aim is to construct submartingales using and S k := Φ(Y 1 (k))φ(y (k)) + Φ(Y 1 (k))φ(y (k)) T k = max{y 1 (k),y (k)} i=min{y 1 (k),y (k)}+1 ψ(i). emma 13. There exists some constant C only depending on the upper and lower bound of r and ω such that S k + CT k is a submartingale and S k CT k is a supermartingale. Proof. et us compute the conditional expectations with respect to (F k ) k, the filtration generated by the process Y. First, observe that E(S k+1 F k ) = S k if Y 1 (k) Y (k). Next, by definition E(S k+1 Y 1 (k) = Y (k) = i) is equal to 1 r i+1/ r i 1/ +r i+1/ 1 + r i 1/ r i 1/ +r i+1/ 0 p [Φ(i + 1) + Φ(i + 1) ] +p(1 p)[φ(i)φ(i + 1) + Φ(i)Φ(i + 1)] +(1 p) [Φ(i) + Φ(i) ]dbeta(ω i+1 /, ω i /, p) 0 p [Φ(i 1) + Φ(i 1) ] +p(1 p)[φ(i)φ(i 1) + Φ(i)Φ(i 1)] +(1 p) [Φ(i) + Φ(i) ]dbeta(ω i 1 /, ω i /, p)

26 6 where we used the shorthand dbeta(α, β, p) = 1 B(α,β) pα 1 (1 p) β 1 dp. Computing the integrals and using (1) gives E(S k+1 S k Y 1 (k) = Y (k) = i) ( = [ψ(i)] r i+1/ ω i+1 (ω i+1 + ) r i 1/ + r i+1/ (ω i + ω i+1 )(ω i + ω i+1 + ) ) r i 1/ ω i 1 (ω i 1 + ) + r i 1/ + r i+1/ (ω i 1 + ω i )(ω i 1 + ω i + ) Similarly, E(S k+1 S k Y 1 (k) = i, Y (k) = i + 1) is by definition equal to r i 1/ ω i 1 r i 1/ +r i+1/ +r i+3/ [ ψ(i)φ(i + 1) + ψ(i)φ(i + 1)] ω i 1 + ω i r + i+1/ { 1 ( r i 1/ +r i+1/ +r i+3/ p [Φ(i + 1) + Φ(i + 1) ] + r i+3/ ω i+ r i 1/ +r i+1/ +r i+3/ 0 +p(1 p)[φ(i)φ(i + 1) + Φ(i + 1)Φ(i)] +(1 p) [Φ(i) + Φ(i) ] S k ) dbeta(ωi 1, ω i, p) Φ(i) Φ(i) } ω i+1 + ω i+ [Φ(i)ψ(i + ) Φ(i)ψ(i + )]. A similar computation to the previous one gives E(S k+1 S k Y 1 (k) = i, Y (k) = i + 1) = [ψ(i + 1)] r i+1/ r i 1/ + r i+1/ + r i+3/ ω i ω i+1 (ω i + ω i+1 )(ω i + ω i+1 + ). Just like in the case of S k, we have E(T k+1 F k ) = T k if Y 1 (k) Y (k). Furthermore, and r i+1/ E(T k+1 T k Y 1 (k) = Y ω i ω i+1 (k) = i) = [ψ(i + 1)] r i 1/ + r i+1/ (ω i + ω i+1 ), E(T k+1 T k Y 1 (k) = i, Y (k) = i + 1) = [ψ(i + 1)] 1 3 We conclude that there is some positive constant c such that 0 < E(S k+1 S k Y 1 (k) = Y (k)) < 1 c 1/c < E(S k+1 S k Y 1 (k) Y c < E(T k+1 T k Y 1 (k) = Y c < E(T k+1 T k Y 1 (k) Y The lemma follows. (k) = 1) < 0 (k)) (k) = 1) emma 14. lim P () (,)+P () (0,0) [A () (x)] +[1 A () (x)] = 1. ω i ω i+1 (ω i + ω i+1 + ).

ON METEORS, EARTHWORMS AND WIMPS

ON METEORS, EARTHWORMS AND WIMPS ON METEORS, EARTHWORMS AND WIMPS SARA BILLEY, KRZYSZTOF BURDZY, SOUMIK PAL, AND BRUCE E. SAGAN Abstract. We study a model of mass redistribution on a finite graph. We address the questions of convergence

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

4 Sums of Independent Random Variables

4 Sums of Independent Random Variables 4 Sums of Independent Random Variables Standing Assumptions: Assume throughout this section that (,F,P) is a fixed probability space and that X 1, X 2, X 3,... are independent real-valued random variables

More information

Logarithmic scaling of planar random walk s local times

Logarithmic scaling of planar random walk s local times Logarithmic scaling of planar random walk s local times Péter Nándori * and Zeyu Shen ** * Department of Mathematics, University of Maryland ** Courant Institute, New York University October 9, 2015 Abstract

More information

THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON

THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON GEORGIAN MATHEMATICAL JOURNAL: Vol. 3, No. 2, 1996, 153-176 THE SKOROKHOD OBLIQUE REFLECTION PROBLEM IN A CONVEX POLYHEDRON M. SHASHIASHVILI Abstract. The Skorokhod oblique reflection problem is studied

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

arxiv: v2 [math.pr] 4 Sep 2017

arxiv: v2 [math.pr] 4 Sep 2017 arxiv:1708.08576v2 [math.pr] 4 Sep 2017 On the Speed of an Excited Asymmetric Random Walk Mike Cinkoske, Joe Jackson, Claire Plunkett September 5, 2017 Abstract An excited random walk is a non-markovian

More information
