A NOTE ON THE DECAY OF CORRELATIONS UNDER δ PINNING
A NOTE ON THE DECAY OF CORRELATIONS UNDER δ-PINNING

DMITRY IOFFE AND YVAN VELENIK

Abstract. We prove that for a class of massless φ interface models on Z² the introduction of an arbitrarily small pinning self-potential leads to exponential decay of correlations or, in other words, to the creation of mass.

AMS Subject Classification (1991): 60G60, 60K35, 82B20, 82B24, 82B41.

In this note we study a family of effective interface models over Z² with formal Hamiltonian H given by

(1) H(φ) = Σ_{⟨i,j⟩} V(φ_i − φ_j),

where the summation is over all nearest-neighbour pairs ⟨i,j⟩ of Z², and the following two assumptions are made on the interaction potential V:

(2) V is even and smooth, and there exists a constant c_V ≥ 1 such that c_V^{−1} ≤ V″(t) ≤ c_V for all t ∈ R.

Remark 1. No further assumptions on c_V are made and, in fact, we expect that the results of the paper remain true if only the lower bound in (2) is assumed. Also, though we do not stipulate it explicitly at each particular instance, the values of all the positive constants we use below depend on c_V.

Given a set A ⊆ Z² with a finite complement A^c = Z² \ A, we use P_A to denote the finite-volume Gibbs measure on Ω_A = R^{A^c} with the Hamiltonian H and zero boundary conditions on A:

(3) P_A(dφ) = Z(A)^{−1} e^{−H(φ)} Π_{i∈A^c} dφ_i Π_{j∈A} δ_0(dφ_j).

It is well known that P_A delocalizes as A^c ↗ Z²; maybe the easiest way to see this is to use the reverse Brascamp-Lieb inequality [2], which implies that the variance of φ under P_A dominates the corresponding Gaussian variance. If, however, an essentially arbitrarily small pinning self-potential is added to H, then the situation changes radically, and the infinite-volume Gibbs state exists in the usual sense. This phenomenon was first worked out in the Gaussian case (c_V = 1) in [4]. Our main reference [3] contains a proof of the localization for a fairly general class of interactions and self-potentials.
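As a numerical aside of ours (not part of the paper), assumption (2) can be made concrete: the even potential V(t) = t²/2 + a·cos t satisfies c_V^{−1} ≤ V″ ≤ c_V with c_V = 1/(1−a), and the Hamiltonian (1) with zero boundary conditions is invariant under φ → −φ, the symmetry exploited for the covariance decomposition below. All names and parameter values here are illustrative.

```python
import numpy as np

def V(t, a=0.1):
    """Even, smooth example potential: V''(t) = 1 - a*cos(t) lies in
    [1 - a, 1 + a], so assumption (2) holds with c_V = 1/(1 - a)."""
    return 0.5 * t**2 + a * np.cos(t)

def hamiltonian(phi, a=0.1):
    """H(phi) = sum of V(phi_i - phi_j) over nearest-neighbour bonds,
    with zero boundary conditions imposed by padding with a frame of 0s."""
    p = np.pad(phi, 1)
    return (V(p[:, 1:] - p[:, :-1], a).sum()
            + V(p[1:, :] - p[:-1, :], a).sum())

# Since V is even, H(phi) = H(-phi): the single-site marginals under the
# Gibbs measure (3) are symmetric around 0.
```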
In this note we prove that, in the case of the family of random interfaces (1), the delocalization/localization transition is sharp in the sense that it always comes together with the exponential decay of correlations or, using the language of the more physically oriented literature, with the creation of mass. For simplicity, but also in order to give a cleaner exposition of the otherwise more general renormalization ideas behind the proof, we consider here only the case of the so-called δ-pinning, thereby generalizing recent results of [1] on purely Gaussian fields (that is, again c_V = 1).

Supported by DFG grant De 663/2-1. Date: May 3, 2000.
Given a box Λ_N = [−N, …, N]² ⊂ Z² and a number J ∈ R (which characterizes the strength of the pinning), we define the following measure P̂_N on R^{Λ_N}:

(4) P̂_N(dφ) = Ẑ_N^{−1} e^{−H(φ)} Π_{i∈Λ_N} (dφ_i + e^J δ_0(dφ_i)) Π_{j∈Z²\Λ_N} δ_0(dφ_j).

Notice that the case J = −∞ corresponds to the original measure on R^{Λ_N} with the Hamiltonian (1), which delocalizes as N → ∞.

Theorem 1. For every J ∈ R there exist an exponent (mass) m = m(J) > 0 and a constant c_1 = c_1(J) < ∞, such that

(5) cov_{P̂_N}(φ_i; φ_j) ≤ c_1 e^{−m|i−j|}

uniformly in N and in i, j ∈ Z².

Of course, there is nothing to prove if either i or j lies outside of Λ_N. In fact, the sub-index N is superfluous: all the estimates we use and obtain simply do not depend on the particular Λ_N, and the only reason we need it is to make the definitions mathematically meaningful. From now on we shall drop the sub-index N from the notation.

The right way to think about (4) is as the joint distribution of the field of random interface heights {φ_i}_{i∈Z²} and the random dry set A:

A = {i ∈ Z² : φ_i = 0}.

Integrating out all the height variables φ in (4) we arrive at the following probability distribution of A:

(6) P̂(A = A) = ν(A) = Ẑ^{−1} e^{J|A|} Z(A) = e^{J|A|} Z(A) / Σ_D e^{J|D|} Z(D),

where the partition function Z(A) is the same as in (3). Using the probabilistic weights {ν(A)} one can rewrite P̂ as the convex combination

(7) P̂(·) = Σ_A ν(A) P_A(·).

Since under each P_A the distribution of φ_i is symmetric for every i ∈ Z², this gives rise to the following decomposition of the covariances:

(8) cov_{P̂}(φ_i; φ_j) = Σ_A ν(A) ⟨φ_i; φ_j⟩_A,

where ⟨φ_i; φ_j⟩_A denotes the covariance under P_A. At this point we shall utilize the random walk representation of ⟨φ_i; φ_j⟩_A, which was first developed in the PDE context in [5]. We follow the approach of [2], where the Helffer-Sjöstrand representation was put on probabilistic tracks: One constructs a stochastic process (Φ(t), X(t)), where Φ(·) is a diffusion on R^{A^c} with invariant measure P_A.
Given a trajectory φ(·) of the process Φ, X(t) is an, in general inhomogeneous, transient random walk on A^c ⊂ Z² with life-time τ_A = inf{t : X(t) ∈ A} and time-dependent jump rates

(9) a(i, j; t) = V″(φ_i(t) − φ_j(t)) if i ~ j, and a(i, j; t) = 0 otherwise.
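As an illustration of ours (not in the paper), the weights ν(A) of (6) can be computed exactly in a toy setting: a one-dimensional Gaussian chain (V(t) = t²/2, so Z(A) is an explicit determinant) instead of the two-dimensional non-Gaussian field. The function names and the chain geometry are our own simplifications.

```python
import itertools
import numpy as np

def Z_gauss(free):
    """Gaussian partition function Z(A) for V(t) = t^2/2 on a chain with
    zero boundary at both ends; `free` lists the unpinned inner sites.
    Z(A) = (2*pi)^{|free|/2} / sqrt(det Q), Q = restricted lattice Laplacian."""
    if not free:
        return 1.0
    idx = {s: k for k, s in enumerate(free)}
    Q = 2.0 * np.eye(len(free))
    for s in free:
        for t in (s - 1, s + 1):
            if t in idx:
                Q[idx[s], idx[t]] = -1.0
    return (2 * np.pi) ** (len(free) / 2) / np.sqrt(np.linalg.det(Q))

def nu(n, J):
    """The weights of (6): nu(A) proportional to e^{J|A|} Z(A), enumerated
    over all dry sets A inside a chain of n inner sites."""
    sites = tuple(range(n))
    w = {}
    for r in range(n + 1):
        for A in itertools.combinations(sites, r):
            free = [s for s in sites if s not in A]
            w[A] = np.exp(J * len(A)) * Z_gauss(free)
    total = sum(w.values())
    return {A: v / total for A, v in w.items()}
```

Increasing J tilts ν towards larger dry sets: the expected size of A grows with J, which is precisely the localization mechanism the pinning reward e^J provides.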
Let us use E_{i,φ} to denote the law of (X(t), Φ(t)) starting from the point (i, φ) ∈ A^c × R^{A^c}. Then ([5], [2]),

(10) ⟨φ_i; φ_j⟩_A = E_{i,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds.

Substituting the latter expression into (8),

(11) cov_{P̂}(φ_i; φ_j) = Σ_A ν(A) E_{i,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds.

It is very easy now to explain the logic behind the proof of Theorem 1: The expression E_{i,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds describes the time spent at the site j by the random walk X(·), started at i, before being killed upon entering the dry set A, which for this purpose can be considered a random killing obstacle. In order to prove that this time is exponentially small in |i − j|, one needs an appropriate density estimate on A and a certain path-wise control on the exit distributions of X(·). In the Gaussian case considered in [1], X(·) happens to be just the simple random walk on Z², which is completely decoupled from the diffusion part Φ(·) and thus behaves independently of A and of the initial condition φ ∈ R^{A^c}. This led in [1] to a resummation argument which substantially facilitated the matter. One of the main difficulties in the non-Gaussian case we consider here is the dependence of the distribution of X(·) on the realization of the dry set A and on the sample path of the diffusion Φ. We still have very little to say about this dependence. However, due to the basic assumption (2) on the interaction potential V, the jump rates a(i, j; t) in (9) are uniformly bounded above and below:

(12) c_V^{−1} ≤ a(i, j; t) ≤ c_V for i ~ j.

In particular, one always has rough control over the hitting distributions. For example, if the random walk X enters a box B_l of linear size l which is known to contain a dry site (it will be convenient to call such a box dirty), then the probability that X hits this site (and consequently dies there) before leaving B_l is bounded below by some positive number p = p(l) > 0.
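The existence of p(l) > 0 can be seen concretely in the simplest case c_V = 1, where X is the simple random walk. The following sketch of ours (parameters illustrative) solves the harmonic system for the hitting probability exactly and compares it with the crude "march straight to the target" lower bound derived later in the paper.

```python
import numpy as np

def hit_before_exit(l, dry, start):
    """Exact probability that simple random walk started at `start` hits
    the dry site `dry` before leaving the l x l box: solve the harmonic
    system h(x) = mean of h over the 4 neighbours, h(dry) = 1, h = 0 outside."""
    sites = [(x, y) for x in range(l) for y in range(l) if (x, y) != dry]
    idx = {s: k for k, s in enumerate(sites)}
    A = np.eye(len(sites))
    b = np.zeros(len(sites))
    for (x, y), k in idx.items():
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb == dry:
                b[k] += 0.25
            elif nb in idx:
                A[k, idx[nb]] -= 0.25
            # neighbours outside the box contribute h = 0
    h = np.linalg.solve(A, b)
    return h[idx[start]]
```

For a 5×5 dirty box with the dry site at its center, the walk started at a corner (l1-distance 4) hits the dry site with probability strictly above the direct-path bound (1/4)^4 used in the derivation of (20) below.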
Thus, if the realisation of the random dry set A is such that on its way from i to j the walk X cannot avoid visiting at least ε|i − j| disjoint dirty l-boxes, the probability that it eventually reaches j before being killed should be bounded above by something like (1 − p(l))^{ε|i−j|}. Proposition 2 below makes this computation precise. The crux of the matter, however, is to ensure that on a certain finite l-scale the density of the dirty l-boxes is so high that only with exponentially small probability does the realization of A permit an ε-clean passage from i to j. A statement of this sort is given in Proposition 1. Once the renormalization approach sketched above is accepted as the strategy of the proof, the first impulse is to try to compare the distribution of A on different l-scales with, say, independent Bernoulli percolation or other known models with controllable decay of connectivities. We have tried this and failed, and, at least in the case of Z², such a comparison is unlikely to work. The relevant statistical properties of the random dry set A on the various finite length scales are captured in the following estimate, which generalizes the key Proposition 5.1 of [3].
Theorem 2. For each J ∈ R there exist a number R = R(J) < ∞ and an exponent η = η(J) > 0, such that whenever a finite set B ⊂ Z² admits a decomposition

(13) B = ∪_{l=1}^n B_l

into connected disjoint components B_1, …, B_n with

(14) diam(B_l) ≥ R; l = 1, …, n,

the following exponential upper bound on having all of B clean of dry points holds:

(15) Σ_{A: A∩B=∅} ν(A) ≤ e^{−η|B|}.

We relegate the proof of Theorem 2 to the end of the paper and, assuming for the moment its validity, proceed directly to the proof of the mass-generation claim of Theorem 1.

Proof of Theorem 1: The number R = R(J) which appears in the basic Theorem 2 sets the stage for the finite-scale renormalization analysis of the random dry set A. Let us pick a number l > R, l ∈ N, and define the renormalized lattice Z²_l = (2l + 1)Z². To distinguish between sets on the original lattice Z² and those on the renormalized lattice Z²_l, we shall always mark the latter by the super-index l. For example, B^l(x, r) stands for the Z²_l lattice box centered at x ∈ Z²_l:

B^l(x, r) = {y ∈ Z²_l : |x − y| ≤ lr}.

Let us define Γ^l(r) as the set of all Z²_l nearest-neighbour self-avoiding lattice paths leading from the origin to the boundary ∂B^l(0, r). With each γ^l ∈ Γ^l(r) we associate a connected chain γ̄^l of l-blocks on the original lattice Z²:

γ̄^l = ∪_{x∈γ^l} B(x, l).

Let us fix a number ε ∈ (0, 1). We say that a path γ^l ∈ Γ^l is (r, ε)-clean in A ⊂ Z² if

#{x ∈ γ^l : B(x, l) ∩ A ≠ ∅} < εr.

Similarly, we say that a set A ⊂ Z² is (r, ε)-clean if there exists a path γ^l ∈ Γ^l(r) which is (r, ε)-clean for A. Otherwise, we shall call A (r, ε)-dirty.

Proposition 1. For each ε ∈ (0, 1) there exist a number l_0 = l_0(ε, J) < ∞ and a radius r_0 = r_0(ε), such that for every choice of l ≥ l_0,

Σ_{A (r,ε)-clean} ν(A) ≤ e^{−c_2(ε,l) r},

uniformly in r ≥ r_0, where c_2(ε, l) diverges (as l²) with l.

Proof: The condition on r_0(ε) is a semantic one; the only thing we want is to ensure that r > [εr].
Let us estimate the probability of the event {A is (r, ε)-clean} as follows:

(16) Σ_{A (r,ε)-clean} ν(A) ≤ Σ_{k≥r} Σ_{γ^l∈Γ^l: |γ^l|=k} Σ_{A: γ^l is (r,ε)-clean in A} ν(A).

Each path γ^l = (0, x_1, …, x_k), γ^l ∈ Γ^l, which is (r, ε)-clean in A contains at most [εr] vertices x_{i_1}, …, x_{i_M}, M ≤ [εr], such that the corresponding l-blocks have a non-empty intersection with A:

B(x_{i_m}, l) ∩ A ≠ ∅; m = 1, …, M.
Whatever happens, for a path γ^l of length k there are at most 2^k (in fact much fewer, due to the restriction M ≤ [εr]) possible ways to choose a sub-family γ̄^l_dirty,

γ̄^l_dirty = ∪_{m=1}^M B(x_{i_m}, l),

of dirty blocks along γ^l. On the other hand, fixing both γ^l and its dirty part γ̄^l_dirty, we can use Theorem 2 to obtain

(17) Σ_{A: A ∩ (γ̄^l \ γ̄^l_dirty) = ∅} ν(A) ≤ exp{−η |γ̄^l \ γ̄^l_dirty|} ≤ e^{−η(k−[εr])l²}.

We thus conclude that, for any k ≥ r and for each γ^l ∈ Γ^l with |γ^l| = k,

Σ_{A: γ^l is (r,ε)-clean in A} ν(A) ≤ e^{−η(J)l²(k−[εr]) + k log 2}.

Using the above estimate together with the trivial bound

#{γ^l ∈ Γ^l : |γ^l| = k} ≤ 4^k

to perform the summation in (16), we arrive at the claim of Proposition 1.

Nothing in the above argument depends on the fact that the box B(0, rl) is centered at the origin. Without any loss of generality we shall prove (5) only for the case i = 0. Let us fix l and ε as in the statement of Proposition 1. For each j with |j| > rl we use (11) and estimate:

(18) cov_{P̂}(φ_0; φ_j) ≤ Σ_{A (r,ε)-dirty} ν(A) max_φ E_{0,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds + Σ_{A (r,ε)-clean} ν(A) E_{0,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds.

Let us consider the first term on the RHS of (18). Using τ_{rl} to denote the exit time from B(0, rl), this term can be further bounded above by

(19) max_{A (r,ε)-dirty} max_φ P_{0,φ}(τ_A > τ_{rl}) · Σ_B ν(B) max_ψ E^B_{j,ψ} ∫_0^{τ_B} I_{{X(s)=j}} ds.

It is convenient to estimate the above expression in the complete generality of time-dependent random walks with bounded jump rates a(i, j; t): Let X(t) be the time-inhomogeneous Markov process with transition rates as in (12). It is always possible to homogenize it by considering X̃(t) = (X(t), t). We shall use Ẽ_{(i,t)} to denote the law of X̃ with space-time starting point (i, t) ∈ Z² × R. The box B(0, rl) is decomposed into the disjoint union of sub-blocks on the l-scale:

B(0, rl) = ∪_{x∈B^l(0,r)} B(x, l).

To a generic point i ∈ B(0, rl) we associate an l-block B_l(i) according to the following rule: B_l(i) = B(x, l) if i ∈ B(x, l) for some x ∈ Z²_l.
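The counting behind Proposition 1 can be checked numerically. Combining the entropy factors 4^k (paths) and 2^k (choices of dirty sub-family) with the clean-passage cost e^{−ηl²(k−[εr])} gives a series that converges, and decays exponentially in r, as soon as ηl² > log 8. The following sketch of ours uses purely illustrative values of η, l, ε:

```python
import math

def prop1_bound(eta, l, eps, r, kmax=2000):
    """Sum over path lengths k >= r of 4^k * 2^k * exp(-eta*l^2*(k - [eps r])),
    i.e. the summation performed in (16). The geometric series converges
    once eta*l^2 > log 8, and the total decays exponentially in r."""
    m = math.floor(eps * r)
    return sum(math.exp(k * math.log(8.0) - eta * l * l * (k - m))
               for k in range(r, kmax))
```

With, say, η = 0.5, l = 3, ε = 1/4 one has ηl² = 4.5 > log 8 ≈ 2.08, and the bound visibly decreases as r grows, in line with the e^{−c₂(ε,l)r} claim.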
Given an (r, ε)-dirty set A ⊂ Z², let us call a block B(x, l), x ∈ Z²_l, dirty if B(x, l) ∩ A ≠ ∅.
We now introduce the following family of stopping times for the process X(t):

T_1 = inf{t ≥ 0 : B_l(X(t)) is dirty},
S_1 = inf{t ≥ T_1 : B_l(X(t)) ≠ B_l(X(T_1))},
T_2 = inf{t ≥ S_1 : B_l(X(t)) is dirty}, …,
S_n = inf{t ≥ T_n : B_l(X(t)) ≠ B_l(X(T_n))}, ….

The condition of being (r, ε)-dirty readily translates, under P_A, into the sure event {τ_{rl} > T_{[εr]}}. Consequently, if, as before, we use τ_A to denote the hitting time of the set A,

P_{(0,0)}(τ_A > τ_{rl}) ≤ P_{(0,0)}(τ_A > T_{[εr]}) = Ẽ_{(0,0)} Ẽ_{X̃(T_1)} I_{{τ_A>S_1}} ⋯ Ẽ_{X̃(T_{[εr]})} I_{{τ_A>S_{[εr]}}}.

We claim that each of the [εr] terms in the above product admits an upper bound of the form

(20) 1 − (3c_V² + 1)^{−2l},

uniformly in all Markov chains satisfying the bounded-rates condition (12) and (which is the same) in all possible values of the above stopping times. Indeed, let B_l be a box of side length l, and i, k ∈ B_l. Then one strategy for a random walk starting at i to hit k before leaving B_l is to march to k directly along some prescribed unambiguous trajectory, say first horizontally and then vertically. Clearly, if one pulls down the rates along such a trajectory to the minimum value 1/c_V and pushes the rates leading out of this trajectory up to the maximal value c_V, then the probability to follow the trajectory itself only decreases, namely to the exactly computable value

(3c_V² + 1)^{−|i−k|},

where the power |i − k|, of course, corresponds to the number of steps along the trajectory. Since |i − k| ≤ 2l inside B_l, (20) follows. As a result:

Proposition 2. Uniformly in r and in (r, ε)-dirty sets A,

max_φ P_{0,φ}(τ_A > τ_{rl}) ≤ e^{−c_3 rl}.

Finally,

(21) Σ_B ν(B) max_φ E^B_{j,φ} ∫_0^{τ_B} I_{{X(s)=j}} ds = Σ_{k≥1} Σ_{B: d(j,B)=k} ν(B) max_φ E^B_{j,φ} ∫_0^{τ_B} I_{{X(s)=j}} ds,

where d(j, B) = inf{|j − i| : i ∈ B}.
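The exactly computable value behind (20) comes from a rate comparison at each step: the prescribed move has rate at least 1/c_V, while the three competing exits from the current site have rate at most c_V each. A small sketch of ours, with hypothetical function names:

```python
def step_prob(cV):
    """Worst-case probability of the prescribed step: rate 1/cV along the
    trajectory versus three competing exits of rate cV each, giving
    (1/cV) / (1/cV + 3*cV) = 1 / (3*cV**2 + 1)."""
    return (1.0 / cV) / (1.0 / cV + 3.0 * cV)

def hit_lower_bound(cV, steps):
    """(3*cV**2 + 1)**(-steps): probability of marching straight to the
    target along a prescribed trajectory of `steps` moves; with at most
    2l steps inside a box of side l this yields the bound in (20)."""
    return step_prob(cV) ** steps
```

For the simple random walk (c_V = 1) each step succeeds with probability 1/4, recovering the familiar (1/4)^{|i−k|} direct-path bound.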
Proceeding as in the proof of Proposition 2, we readily obtain that there exists a number M = M(c_V) < ∞, such that

(22) max_φ E^B_{j,φ} ∫_0^{τ_B} I_{{X(s)=j}} ds ≤ M^k,

whenever d(j, B) = k. On the other hand, by Theorem 2,

(23) Σ_{B: d(j,B)=k} ν(B) ≤ e^{−ηk²},

as soon as k > R. Therefore, the sum in (21) converges, thereby concluding the analysis of the first term.

Let us consider the second term in (18). It can be easily bounded by

(24) Σ_k Σ_{A (r,ε)-clean, d(j,A)=k} ν(A) E_{0,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds ≤ Σ_k max_{A: d(j,A)=k} E_{0,φ} ∫_0^{τ_A} I_{{X(s)=j}} ds ( Σ_{B (r,ε)-clean} ν(B) · Σ_{C: d(j,C)=k} ν(C) )^{1/2}.

The last two sums in (24) have been bounded in Proposition 1 and in (23), respectively. Using (22), the remaining sum over k can be dealt with as before. The proof of Theorem 1 is thereby concluded.

Proof of Theorem 2: Let us start by introducing some additional notation: Given a finite set B ⊂ Z² with the decomposition (13) into the disjoint union of connected components B_1, …, B_n, we say that another set A is a dry neighbour of B, written A ∈ D_B, if A ∩ B = ∅ but A ∩ ∂B_l ≠ ∅ for l = 1, …, n.

Proposition 3. There exists a constant c_4 = c_4(J) > 0, such that for every finite B ⊂ Z²,

(25) Σ_{A∈D_B} ν(A) ≤ e^{−c_4|B|}.

The proof of Proposition 3 relies on the following two basic estimates, which have been proven in [3]:

(1) There exist a number M = M(J) and a constant c_5 = c_5(J), such that

(26) inf_{A∈D_B} Σ_{C⊆∂B} e^{J|C|} Z(A ∪ C)/Z(A) ≥ e^{c_5|B|},

whenever B is connected and diam(B) ≥ M.

(2) Let A and i ∈ Z² \ A. Then,

(27) Z(A ∪ {i})/Z(A) ≥ c_6(J)/log d(i, A).

The above estimates are linked to the claim of Proposition 3 in the following way:

Σ_{A∈D_B} ν(A) · inf_{A∈D_B} Σ_{C_1⊆∂B_1} ⋯ Σ_{C_n⊆∂B_n} (Z(A ∪ C_1 ∪ ⋯ ∪ C_n)/Z(A)) e^{J|C_1 ∪ ⋯ ∪ C_n|} ≤ 1.

If, for some m ∈ {1, …, n−1}, we regroup B as B = B′ ∪ B″ = {B_1, …, B_m} ∪ {B_{m+1}, …, B_n},
then, since A ∪ C_1 ∪ ⋯ ∪ C_m always belongs to D_{B″} with B″ = ∪_{l=m+1}^n B_l, we obtain the following decoupling estimate:

(28) inf_{A∈D_B} Σ_{C_1⊆∂B_1} ⋯ Σ_{C_n⊆∂B_n} (Z(A ∪ C_1 ∪ ⋯ ∪ C_n)/Z(A)) e^{J|C_1 ∪ ⋯ ∪ C_n|} ≥ [ inf_{A∈D_{B′}} Σ_{C_1⊆∂B_1} ⋯ Σ_{C_m⊆∂B_m} (Z(A ∪ C_1 ∪ ⋯ ∪ C_m)/Z(A)) e^{J|C_1 ∪ ⋯ ∪ C_m|} ] × [ inf_{A∈D_{B″}} Σ_{C_{m+1}⊆∂B_{m+1}} ⋯ Σ_{C_n⊆∂B_n} (Z(A ∪ C_{m+1} ∪ ⋯ ∪ C_n)/Z(A)) e^{J|C_{m+1} ∪ ⋯ ∪ C_n|} ].

In particular, the claim (25) directly follows from the estimate (26) whenever diam(B_l) > M for each l = 1, …, n. In fact, in view of (26) and (28), it remains to study only the case when all connected components of B are small: diam(B_l) < M, l = 1, …, n. In the latter situation, however, we can use (27) and estimate, for every l and every C_l ⊆ ∂B_l,

inf_{A∈D_{B_l}} Z(A ∪ C_l)/Z(A) ≥ (c_6/log(2M + 1))^{|C_l|}.

Therefore,

inf_{A∈D_B} Σ_{C_1⊆∂B_1} ⋯ Σ_{C_n⊆∂B_n} (Z(A ∪ C_1 ∪ ⋯ ∪ C_n)/Z(A)) e^{J Σ_{l=1}^n |C_l|} ≥ Π_{l=1}^n (1 + e^J c_6/log(2M + 1))^{|∂B_l|},

and (25) follows.

Remark 2. One could hope to deduce from Proposition 3 the claim of Theorem 2 even without the additional assumption (14), but we were not able to do so. We would like to stress, however, that within the framework of the renormalization approach we are trying to develop there is absolutely no point in relaxing (14).

The rest of the proof is an adaptation of the ideas of [3] to the case of multiply connected sets: First of all, for any finite D ⊂ Z² let us denote by D^(k) its k-enlargement:

D^(k) = {i ∈ Z² : d(i, D) ≤ k}.

Assume now that B = ∪_{l=1}^n B_l is as in the assumptions of Theorem 2, that is, the diameter of each connected component B_l of B is bounded below:

diam(B_l) ≥ R; l = 1, …, n.

We have to show that the bound (15) holds uniformly in such B-s as soon as R is chosen large enough. Let us say that a tuple k = (k_1, …, k_n) of n natural numbers is B-admissible if:

k_1 ∈ N (no restriction);
either k_2 = 0, or the sets B_1^(k_1) and B_2^(k_2) are disjoint;
either k_3 = 0, or the set B_3^(k_3) is disjoint from B_1^(k_1) ∪ B_2^(k_2);
…
either k_n = 0, or the set B_n^(k_n) is disjoint from ∪_{l=1}^{n−1} B_l^(k_l).

For any B-admissible tuple k we set

B^(k) = ∪_{l=1}^n B_l^(k_l).

This construction enjoys the following two properties:
In the Gaussian case, the result holds without this assumption, as was shown in [1] using reflection positivity.
(1) For any A with A ∩ B = ∅ there is a unique B-admissible tuple k such that A ∈ D_{B^(k)}. Indeed, this tuple k = (k_1, …, k_n) can be constructed in the following way:

k_1 = max{k : A ∩ B_1^(k) = ∅},
k_2 = max{k > 0 : B_2^(k) ∩ (A ∪ B_1^(k_1)) = ∅},
…
k_n = max{k > 0 : B_n^(k) ∩ (A ∪ ∪_{l=1}^{n−1} B_l^(k_l)) = ∅},

with the convention that the maximum over an empty set equals zero.

(2) For any B-admissible tuple k = (k_1, …, k_n),

|B^(k)| ≥ |B| + Σ_{l=1}^n k_l.

This follows directly from the definition of B-admissibility. Using Proposition 3, we thereby obtain:

Σ_{A: A∩B=∅} ν(A) = Σ_{B-admissible k} Σ_{A∈D_{B^(k)}} ν(A) ≤ Σ_{B-admissible k} e^{−c_4(|B| + Σ_l k_l)} ≤ e^{−c_4|B|} (1 − e^{−c_4})^{−n}.

By the assumption (14), n ≤ |B|/R. Thus it remains to choose R = R(J) so large that

η(J) = c_4(J) + log(1 − e^{−c_4(J)})/R > 0,

and (15) follows.

References

[1] E. Bolthausen, D. Brydges (1998), A Gaussian surface pinned by a weak potential, preprint.
[2] J.-D. Deuschel, G. Giacomin, D. Ioffe (1998), Concentration results for a class of effective interface models, to appear in Probab. Theory Related Fields.
[3] J.-D. Deuschel, Y. Velenik (1998), Non-Gaussian surface pinned by a weak potential, preprint.
[4] F. Dunlop, J. Magnen, V. Rivasseau, P. Roche (1992), Pinning of an interface by a weak potential, J. Stat. Phys. 87.
[5] B. Helffer, J. Sjöstrand (1994), On the correlation for Kac-like models in the convex case, J. Stat. Phys. 74.

WIAS, Mohrenstr. 39, D-10117 Berlin, Germany, and Faculty of Industrial Engineering, Technion, Haifa 32000, Israel

Fachbereich Mathematik, Sekt. MA 7-4, TU Berlin, Strasse des 17. Juni 136, D-10623 Berlin, Germany
More informationAhlswede Khachatrian Theorems: Weighted, Infinite, and Hamming
Ahlswede Khachatrian Theorems: Weighted, Infinite, and Hamming Yuval Filmus April 4, 2017 Abstract The seminal complete intersection theorem of Ahlswede and Khachatrian gives the maximum cardinality of
More information1 The Observability Canonical Form
NONLINEAR OBSERVERS AND SEPARATION PRINCIPLE 1 The Observability Canonical Form In this Chapter we discuss the design of observers for nonlinear systems modelled by equations of the form ẋ = f(x, u) (1)
More informationSMSTC (2007/08) Probability.
SMSTC (27/8) Probability www.smstc.ac.uk Contents 12 Markov chains in continuous time 12 1 12.1 Markov property and the Kolmogorov equations.................... 12 2 12.1.1 Finite state space.................................
More informationRandom geometric analysis of the 2d Ising model
Random geometric analysis of the 2d Ising model Hans-Otto Georgii Bologna, February 2001 Plan: Foundations 1. Gibbs measures 2. Stochastic order 3. Percolation The Ising model 4. Random clusters and phase
More informationA strongly polynomial algorithm for linear systems having a binary solution
A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th
More informationPhase Transitions. µ a (P c (T ), T ) µ b (P c (T ), T ), (3) µ a (P, T c (P )) µ b (P, T c (P )). (4)
Phase Transitions A homogeneous equilibrium state of matter is the most natural one, given the fact that the interparticle interactions are translationally invariant. Nevertheless there is no contradiction
More informationPoisson random measure: motivation
: motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps
More informationMalliavin Calculus in Finance
Malliavin Calculus in Finance Peter K. Friz 1 Greeks and the logarithmic derivative trick Model an underlying assent by a Markov process with values in R m with dynamics described by the SDE dx t = b(x
More informationPart I Stochastic variables and Markov chains
Part I Stochastic variables and Markov chains Random variables describe the behaviour of a phenomenon independent of any specific sample space Distribution function (cdf, cumulative distribution function)
More informationWiener Measure and Brownian Motion
Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u
More informationLIMIT THEORY FOR PLANAR GILBERT TESSELLATIONS
PROBABILITY AND MATHEMATICAL STATISTICS Vol. 31, Fasc. 1 (2011), pp. 149 160 LIMIT THEORY FOR PLANAR GILBERT TESSELLATIONS BY TOMASZ S C H R E I B E R (TORUŃ) AND NATALIA S O JA (TORUŃ) Abstract. A Gilbert
More informationSpectral Clustering. Guokun Lai 2016/10
Spectral Clustering Guokun Lai 2016/10 1 / 37 Organization Graph Cut Fundamental Limitations of Spectral Clustering Ng 2002 paper (if we have time) 2 / 37 Notation We define a undirected weighted graph
More informationScaling exponents for certain 1+1 dimensional directed polymers
Scaling exponents for certain 1+1 dimensional directed polymers Timo Seppäläinen Department of Mathematics University of Wisconsin-Madison 2010 Scaling for a polymer 1/29 1 Introduction 2 KPZ equation
More informationON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT
Elect. Comm. in Probab. 10 (2005), 36 44 ELECTRONIC COMMUNICATIONS in PROBABILITY ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT FIRAS RASSOUL AGHA Department
More informationL n = l n (π n ) = length of a longest increasing subsequence of π n.
Longest increasing subsequences π n : permutation of 1,2,...,n. L n = l n (π n ) = length of a longest increasing subsequence of π n. Example: π n = (π n (1),..., π n (n)) = (7, 2, 8, 1, 3, 4, 10, 6, 9,
More informationSzemerédi-Trotter theorem and applications
Szemerédi-Trotter theorem and applications M. Rudnev December 6, 2004 The theorem Abstract These notes cover the material of two Applied post-graduate lectures in Bristol, 2004. Szemerédi-Trotter theorem
More information( ε > 0) ( N N) ( n N) : n > N a n A < ε.
CHAPTER 1. LIMITS OF SEQUENCES 1.1. An Ad-hoc Definition. In mathematical analysis, the concept of limits is of paramount importance, in view of the fact that many basic objects of study such as derivatives,
More informationRANDOM WALKS IN Z d AND THE DIRICHLET PROBLEM
RNDOM WLKS IN Z d ND THE DIRICHLET PROBLEM ERIC GUN bstract. Random walks can be used to solve the Dirichlet problem the boundary value problem for harmonic functions. We begin by constructing the random
More informationOrdinary Differential Equation Introduction and Preliminaries
Ordinary Differential Equation Introduction and Preliminaries There are many branches of science and engineering where differential equations arise naturally. Now days, it finds applications in many areas
More informationfür Mathematik in den Naturwissenschaften Leipzig
ŠܹÈÐ Ò ¹ÁÒ Ø ØÙØ für athematik in den Naturwissenschaften Leipzig Strict convexity of the free energy for non-convex gradient models at moderate β by Codina Cotar, Jean-Dominique Deuschel, and Stefan
More informationSTAT 7032 Probability Spring Wlodek Bryc
STAT 7032 Probability Spring 2018 Wlodek Bryc Created: Friday, Jan 2, 2014 Revised for Spring 2018 Printed: January 9, 2018 File: Grad-Prob-2018.TEX Department of Mathematical Sciences, University of Cincinnati,
More informationRandom graphs: Random geometric graphs
Random graphs: Random geometric graphs Mathew Penrose (University of Bath) Oberwolfach workshop Stochastic analysis for Poisson point processes February 2013 athew Penrose (Bath), Oberwolfach February
More informationIsomorphisms between pattern classes
Journal of Combinatorics olume 0, Number 0, 1 8, 0000 Isomorphisms between pattern classes M. H. Albert, M. D. Atkinson and Anders Claesson Isomorphisms φ : A B between pattern classes are considered.
More informationCapacitary inequalities in discrete setting and application to metastable Markov chains. André Schlichting
Capacitary inequalities in discrete setting and application to metastable Markov chains André Schlichting Institute for Applied Mathematics, University of Bonn joint work with M. Slowik (TU Berlin) 12
More informationCS 246 Review of Proof Techniques and Probability 01/14/19
Note: This document has been adapted from a similar review session for CS224W (Autumn 2018). It was originally compiled by Jessica Su, with minor edits by Jayadev Bhaskaran. 1 Proof techniques Here we
More information1. Introductory Examples
1. Introductory Examples We introduce the concept of the deterministic and stochastic simulation methods. Two problems are provided to explain the methods: the percolation problem, providing an example
More informationΩ = e E d {0, 1} θ(p) = P( C = ) So θ(p) is the probability that the origin belongs to an infinite cluster. It is trivial that.
2 Percolation There is a vast literature on percolation. For the reader who wants more than we give here, there is an entire book: Percolation, by Geoffrey Grimmett. A good account of the recent spectacular
More informationON THE ISING MODEL WITH RANDOM BOUNDARY CONDITION
ON THE ISING MODEL WITH RANDOM BOUNDARY CONDITION A. C. D. VAN ENTER, K. NETOČNÝ, AND H. G. SCHAAP ABSTRACT. The infinite-volume limit behavior of the 2d Ising model under possibly strong random boundary
More informationSpanning and Independence Properties of Finite Frames
Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames
More informationSTOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES
STOCHASTIC PERRON S METHOD AND VERIFICATION WITHOUT SMOOTHNESS USING VISCOSITY COMPARISON: OBSTACLE PROBLEMS AND DYNKIN GAMES ERHAN BAYRAKTAR AND MIHAI SÎRBU Abstract. We adapt the Stochastic Perron s
More informationarxiv:math/ v1 [math.pr] 24 Apr 2003
ICM 2002 Vol. III 1 3 arxiv:math/0304374v1 [math.pr] 24 Apr 2003 Random Walks in Random Environments Ofer Zeitouni Abstract Random walks in random environments (RWRE s) have been a source of surprising
More informationClairvoyant scheduling of random walks
Clairvoyant scheduling of random walks Péter Gács Boston University April 25, 2008 Péter Gács (BU) Clairvoyant demon April 25, 2008 1 / 65 Introduction The clairvoyant demon problem 2 1 Y : WAIT 0 X, Y
More informationLectures on Elementary Probability. William G. Faris
Lectures on Elementary Probability William G. Faris February 22, 2002 2 Contents 1 Combinatorics 5 1.1 Factorials and binomial coefficients................. 5 1.2 Sampling with replacement.....................
More informationStatistical inference on Lévy processes
Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline
More informationBasic Definitions: Indexed Collections and Random Functions
Chapter 1 Basic Definitions: Indexed Collections and Random Functions Section 1.1 introduces stochastic processes as indexed collections of random variables. Section 1.2 builds the necessary machinery
More informationFiltrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition
Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,
More informationThe small ball property in Banach spaces (quantitative results)
The small ball property in Banach spaces (quantitative results) Ehrhard Behrends Abstract A metric space (M, d) is said to have the small ball property (sbp) if for every ε 0 > 0 there exists a sequence
More informationLecture 12. F o s, (1.1) F t := s>t
Lecture 12 1 Brownian motion: the Markov property Let C := C(0, ), R) be the space of continuous functions mapping from 0, ) to R, in which a Brownian motion (B t ) t 0 almost surely takes its value. Let
More informationSet, functions and Euclidean space. Seungjin Han
Set, functions and Euclidean space Seungjin Han September, 2018 1 Some Basics LOGIC A is necessary for B : If B holds, then A holds. B A A B is the contraposition of B A. A is sufficient for B: If A holds,
More informationThe Liapunov Method for Determining Stability (DRAFT)
44 The Liapunov Method for Determining Stability (DRAFT) 44.1 The Liapunov Method, Naively Developed In the last chapter, we discussed describing trajectories of a 2 2 autonomous system x = F(x) as level
More informationAn Introduction to Non-Standard Analysis and its Applications
An Introduction to Non-Standard Analysis and its Applications Kevin O Neill March 6, 2014 1 Basic Tools 1.1 A Shortest Possible History of Analysis When Newton and Leibnitz practiced calculus, they used
More informationClass Meeting # 2: The Diffusion (aka Heat) Equation
MATH 8.52 COURSE NOTES - CLASS MEETING # 2 8.52 Introduction to PDEs, Fall 20 Professor: Jared Speck Class Meeting # 2: The Diffusion (aka Heat) Equation The heat equation for a function u(, x (.0.). Introduction
More informationLecture Notes Introduction to Ergodic Theory
Lecture Notes Introduction to Ergodic Theory Tiago Pereira Department of Mathematics Imperial College London Our course consists of five introductory lectures on probabilistic aspects of dynamical systems,
More informationModern Discrete Probability Branching processes
Modern Discrete Probability IV - Branching processes Review Sébastien Roch UW Madison Mathematics November 15, 2014 1 Basic definitions 2 3 4 Galton-Watson branching processes I Definition A Galton-Watson
More informationIEOR 4701: Stochastic Models in Financial Engineering. Summer 2007, Professor Whitt. SOLUTIONS to Homework Assignment 9: Brownian motion
IEOR 471: Stochastic Models in Financial Engineering Summer 27, Professor Whitt SOLUTIONS to Homework Assignment 9: Brownian motion In Ross, read Sections 1.1-1.3 and 1.6. (The total required reading there
More information