The Sherrington-Kirkpatrick model

Stat 316: Stochastic Processes on Graphs
The Sherrington-Kirkpatrick model
Andrea Montanari

The Sherrington-Kirkpatrick (SK) model was introduced by David Sherrington and Scott Kirkpatrick in 1975 as a "simple solvable" (in their words) model for spin glasses. Spin glasses are a type of magnetic alloy, and "solvable" meant that the asymptotic free entropy density could be computed exactly. It turns out that the original SK solution was incorrect and in fact inconsistent (the authors knew this). A consistent conjecture for the asymptotic free energy per spin was put forward by Giorgio Parisi in 1982, and derived through the non-rigorous replica method. It took 24 years to prove this conjecture. The final proof is due to Michel Talagrand (2006) and is a real tour de force. In these two lectures we will prove that the asymptotic free entropy density exists and that it is upper bounded by the Parisi formula. The first result is due to Francesco Guerra and Fabio Toninelli [GT02], and the second to Guerra [Gue03]. Both are based on an interpolation trick that was an authentic breakthrough, eventually leading to Talagrand's proof.

1 Definitions and the Parisi formula

Let $\{J_{ij}\}_{i,j \in [n]}$ be a collection of i.i.d. $N(0, 1/(2n))$ random variables. The SK model is the random measure over $x \in \{+1,-1\}^n$ defined by

$$\mu_{J,\beta,B}(x) = \frac{1}{Z_n(\beta,B)} \exp\Big\{ \beta \sum_{i,j} J_{ij} x_i x_j + B \sum_i x_i \Big\}. \tag{1}$$

Here $Z_n(\beta,B)$ is defined by the normalization condition $\sum_x \mu_{J,\beta,B}(x) = 1$. It is of course a random variable. An alternative but sometimes useful formulation of the model consists in saying that

$$\mu_{\beta,B}(x) = \frac{1}{Z_n(\beta,B)} \exp\{ -H(x) \}, \tag{2}$$

where $H(x)$ is a Gaussian process indexed by $x \in \{+1,-1\}^n$ with mean and covariance

$$\mathbb{E}\, H(x) = -nB\, M_x, \qquad \mathrm{Cov}(H(x), H(y)) = \frac{n\beta^2}{2}\, Q_{x,y}^2. \tag{3}$$

Here $M_x$ and $Q_{x,y}$ denote the empirical magnetization and the empirical overlap, namely

$$M_x \equiv \frac{1}{n} \sum_{i=1}^n x_i, \qquad Q_{x,y} \equiv \frac{1}{n} \sum_{i=1}^n x_i y_i. \tag{4}$$

Throughout these lectures we will be interested in the free entropy density
$$\phi_n(\beta,B) \equiv \frac{1}{n} \log Z_n(\beta,B). \tag{5}$$

The Parisi formula provides a surprising prediction for this quantity.

Definition 1. Let $D$ be the space of non-decreasing functions $x : [0,1] \to [0,1]$. The Parisi functional is the function $\mathcal{P} : D \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ defined by

$$\mathcal{P}[x; \beta, B] = \log 2 + f(0, B; x) - \frac{\beta^2}{2} \int_0^1 q\, x(q)\, \mathrm{d}q, \tag{6}$$

where $f : [0,1] \times \mathbb{R} \times D \to \mathbb{R}$, $(q, y, x) \mapsto f(q, y; x)$, is the unique solution of the partial differential equation

$$\frac{\partial f}{\partial q} + \frac{1}{2} \frac{\partial^2 f}{\partial y^2} + \frac{x(q)}{2} \Big( \frac{\partial f}{\partial y} \Big)^2 = 0, \tag{7}$$

with boundary condition $f(1, y; x) = \log \cosh(\beta y)$.

The relation between the Parisi formula and the SK free entropy is given by the following theorem.

Theorem 2. Let $\phi_n(\beta,B)$ be the free entropy density of a Sherrington-Kirkpatrick model with $n$ variables. Then, almost surely,

$$\lim_{n\to\infty} \phi_n(\beta,B) = \mathcal{P}_*(\beta,B) \equiv \inf_{x \in D} \mathcal{P}[x; \beta, B]. \tag{8}$$

At first sight, this result appears analogous to the one we proved in the first lecture for the Curie-Weiss model, which we reproduce here for convenience:

$$\lim_{n\to\infty} \phi_n^{\rm CW}(\beta,B) = \sup_{m \in [-1,1]} \varphi_{\beta,B}(m), \tag{9}$$

$$\varphi_{\beta,B}(m) \equiv \mathcal{H}\Big( \frac{1+m}{2} \Big) + \frac{\beta m^2}{2} + B m. \tag{10}$$

Notice however two important and surprising differences: (i) the supremum in the last expression is replaced by an infimum in Eq. (8); (ii) the optimum is taken over a single real parameter in the Curie-Weiss model, and over a function in the SK case.

2 Existence of the limit

In this section we will prove that the limit of $\phi_n(\beta,B)$ as $n \to \infty$ exists almost surely.

Theorem 3. Let $\phi_n(\beta,B)$ be the free entropy density of a Sherrington-Kirkpatrick model with $n$ variables. Then, almost surely,

$$\lim_{n\to\infty} \phi_n(\beta,B) = \phi_\infty(\beta,B), \tag{11}$$

for some non-random quantity $\phi_\infty(\beta,B)$.

Proof: Let $\phi_n^{\rm av}(\beta,B) = \mathbb{E}\, \phi_n(\beta,B)$ be the expected free entropy density. (We will often drop the dependence on $\beta, B$ in the following.) The proof proceeds in two steps. First one shows that $\phi_n$ concentrates around $\phi_n^{\rm av}$, so that it is sufficient to show that the latter (deterministic) sequence converges. This is proved by showing that the sequence $\{ n \phi_n^{\rm av} \}_{n \ge 0}$ is superadditive, and applying Fekete's lemma. Concentration follows from Gaussian isoperimetry. Recall indeed that if $F : \mathbb{R}^k \to \mathbb{R}$ is Lipschitz continuous with modulus $L$ (i.e. if $|F(a) - F(b)| \le L \|a - b\|$) and $Z = (Z_1, \dots, Z_k)$ is a vector of i.i.d. $N(0, \sigma^2)$ random variables, then

$$\mathbb{P}\{ |F(Z) - \mathbb{E}\, F(Z)| \ge u \} \le 2\, e^{-u^2/(2 L^2 \sigma^2)}. \tag{12}$$
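As a numerical aside (not part of the original notes), the concentration inequality above is easy to probe by simulation: the function $F(z) = \max_i z_i$ is Lipschitz with modulus $L = 1$, so the bound holds uniformly in the dimension $k$ even though $\mathbb{E} F \approx \sqrt{2 \log k}$ grows with $k$. The dimension, sample size, and thresholds in this sketch are arbitrary choices.

```python
# Sketch: Gaussian concentration for the 1-Lipschitz function F(z) = max_i z_i.
# The bound P{|F - E F| >= u} <= 2 exp(-u^2/2) (L = sigma = 1) does not
# depend on the dimension k.
import numpy as np

rng = np.random.default_rng(0)
k, trials = 2_000, 1_000

F = rng.standard_normal((trials, k)).max(axis=1)   # samples of F(Z)
dev = np.abs(F - F.mean())

results = {}
for u in (0.5, 1.0, 2.0):
    empirical = float(np.mean(dev > u))            # empirical tail
    bound = min(1.0, 2 * np.exp(-u**2 / 2))        # Gaussian concentration bound
    results[u] = (empirical, bound)
    print(f"u = {u}: empirical tail {empirical:.4f} <= bound {bound:.4f}")
```

In fact the fluctuations of the maximum are even smaller than the dimension-free bound suggests, which is exactly the phenomenon exploited in the proof below.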
Viewing $\phi_n$ as a function of the $n^2$ standardized variables $\{ \sqrt{2n}\, J_{ij} \}$, it is easy to check that $\phi_n$ is Lipschitz continuous with modulus $\beta/\sqrt{2n}$, whence

$$\mathbb{P}\{ |\phi_n - \phi_n^{\rm av}| \ge u \} \le 2\, e^{-n u^2/\beta^2}. \tag{13}$$

By Borel-Cantelli it is therefore sufficient to prove that $\phi_n^{\rm av}$ has a limit.
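Since $Z_n$ is a sum over $2^n$ configurations, $\phi_n$ can be computed exactly for small $n$ by exhaustive enumeration. The sketch below (not part of the original notes; the helper `phi_n` and all parameter values are illustrative) samples the disorder repeatedly and illustrates the concentration of $\phi_n$ around $\phi_n^{\rm av}$.

```python
# Sketch: exact phi_n = (1/n) log Z_n for small n by enumerating the
# hypercube {-1,+1}^n, repeated over independent disorder samples.
import itertools
import numpy as np

def phi_n(n, beta, B, rng):
    """One disorder sample of phi_n, with J_ij i.i.d. N(0, 1/(2n))."""
    J = rng.normal(0.0, np.sqrt(1.0 / (2 * n)), size=(n, n))
    X = np.array(list(itertools.product([-1, 1], repeat=n)))      # 2^n spins
    # energy[k] = beta * sum_{ij} J_ij x_i x_j + B * sum_i x_i for row k
    energy = beta * np.einsum('ki,ij,kj->k', X, J, X) + B * X.sum(axis=1)
    m = energy.max()                                # log-sum-exp for stability
    return (m + np.log(np.exp(energy - m).sum())) / n

rng = np.random.default_rng(1)
samples = np.array([phi_n(10, beta=1.0, B=0.1, rng=rng) for _ in range(50)])
print(f"n = 10: mean {samples.mean():.3f}, std {samples.std():.3f}")
```

The empirical standard deviation is already small at $n = 10$, consistent with the sub-Gaussian tail (13), while the mean stays below the annealed value $\log 2 + \log\cosh(B) + \beta^2/4$.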

As mentioned, existence of the limit follows from the superadditivity of the sequence $\{ n \phi_n^{\rm av} \}_{n \ge 0}$, i.e. from the observation that, for any $n_1, n_2 \in \mathbb{N}$, letting $n = n_1 + n_2$, we have

$$n\, \phi_n^{\rm av} \ge n_1\, \phi_{n_1}^{\rm av} + n_2\, \phi_{n_2}^{\rm av}. \tag{14}$$

In order to prove this inequality, define, for $t \in [0,1]$, the Gaussian process on the hypercube $\{ H_t(x) \}_{x \in \{+1,-1\}^n}$ with mean and covariance

$$\mathbb{E}\, H_t(x) = -nB\, M = -n_1 B\, M_1 - n_2 B\, M_2, \tag{15}$$

$$\mathrm{Cov}(H_t(x), H_t(y)) = \frac{\beta^2}{2} \big\{ t\, n Q^2 + (1-t) [\, n_1 Q_1^2 + n_2 Q_2^2 \,] \big\}, \tag{16}$$

where, letting $[n] = V_1 \cup V_2$ with $V_1 = \{1, \dots, n_1\}$, $V_2 = \{n_1 + 1, \dots, n_1 + n_2 = n\}$, we defined, for $a \in \{1, 2\}$,

$$M_a \equiv \frac{1}{n_a} \sum_{i \in V_a} x_i, \tag{17}$$

$$Q_a \equiv \frac{1}{n_a} \sum_{i \in V_a} x_i y_i. \tag{18}$$

Further, $M = (n_1/n) M_1 + (n_2/n) M_2$ and $Q = (n_1/n) Q_1 + (n_2/n) Q_2$ are the magnetization and overlap defined in Eq. (4). Define

$$\Phi(t) \equiv \mathbb{E} \log \Big\{ \sum_x e^{-H_t(x)} \Big\}. \tag{19}$$

It is obvious that $\Phi(1) = n\, \phi_n^{\rm av}$. Further, for $t = 0$, one can represent the Gaussian process as

$$H_0(x) = -\beta \sum_{i,j \in V_1} J^{(1)}_{ij} x_i x_j - B \sum_{i \in V_1} x_i - \beta \sum_{i,j \in V_2} J^{(2)}_{ij} x_i x_j - B \sum_{i \in V_2} x_i, \tag{20}$$

with $J^{(1)}_{ij} \sim N(0, 1/(2n_1))$ i.i.d. and $J^{(2)}_{ij} \sim N(0, 1/(2n_2))$ i.i.d. It follows that $\Phi(0) = n_1 \phi_{n_1}^{\rm av} + n_2 \phi_{n_2}^{\rm av}$. Superadditivity is proved by showing that the first derivative $\Phi'(t)$ is non-negative. In order to compute this derivative, it is convenient to use the following lemma.

Lemma 4. For $t \in [0,1]$, let $\{X_k\}_{k \in S}$ be a finite collection of jointly Gaussian random variables, with covariance $C_{kl}(t) = \mathbb{E}_t[X_k X_l] - \mathbb{E}_t[X_k] \mathbb{E}_t[X_l]$, and mean $a_k = \mathbb{E}_t[X_k]$ independent of $t$. Then, for any polynomially bounded function $F : \mathbb{R}^S \to \mathbb{R}$,

$$\frac{\mathrm{d}}{\mathrm{d}t}\, \mathbb{E}_t\{ F(X) \} = \frac{1}{2} \sum_{k,l \in S} \frac{\mathrm{d}C_{kl}}{\mathrm{d}t}\, \mathbb{E}_t\Big\{ \frac{\partial^2 F}{\partial x_k \partial x_l}(X) \Big\}. \tag{21}$$

In order to use this formula for computing $\Phi'(t)$, we use

$$\frac{\partial^2}{\partial H(x)\, \partial H(y)} \log \Big\{ \sum_x e^{-H(x)} \Big\} = \mu(x)\, \mathbb{I}_{x=y} - \mu(x)\, \mu(y), \tag{22}$$

and

$$\frac{\mathrm{d}}{\mathrm{d}t}\, \mathrm{Cov}(H_t(x), H_t(y)) = \frac{n\beta^2}{2} \Big\{ \Big( \frac{n_1}{n} Q_1(x,y) + \frac{n_2}{n} Q_2(x,y) \Big)^2 - \frac{n_1}{n} Q_1(x,y)^2 - \frac{n_2}{n} Q_2(x,y)^2 \Big\}. \tag{23}$$

Therefore the first term in Eq. (22) does not give any contribution to the derivative (at $x = y$ all overlaps equal $1$ and the right-hand side of Eq. (23) vanishes), and

$$-\frac{1}{2} \sum_{x,y} \frac{\mathrm{d}}{\mathrm{d}t}\, \mathrm{Cov}(H_t(x), H_t(y))\, \mu(x)\, \mu(y) = -\frac{n\beta^2}{4}\, \mu \otimes \mu \Big\{ \Big( \frac{n_1}{n} Q_1 + \frac{n_2}{n} Q_2 \Big)^2 - \frac{n_1}{n} Q_1^2 - \frac{n_2}{n} Q_2^2 \Big\},$$

which is non-negative by convexity of $x \mapsto x^2$. The derivative $\Phi'(t)$ is obtained by taking the expectation of the above, and hence is non-negative as well. $\square$

3 RSB bounds

Theorem 5. Let $\phi_n^{\rm av}(\beta,B)$ be the expected free entropy density of a Sherrington-Kirkpatrick model with $n$ variables. Then

$$\phi_n^{\rm av}(\beta,B) \le \mathcal{P}_*(\beta,B) \equiv \inf_{x \in D} \mathcal{P}[x; \beta, B]. \tag{24}$$

Proof: Since $\beta, B$ are fixed throughout the proof, we will regard $\mathcal{P} : D \to \mathbb{R}$ uniquely as a function of $x : [0,1] \to [0,1]$. An important simplifying remark is that $\mathcal{P}$ is Lipschitz continuous with respect to the $L^1$ norm. More precisely, for $x, x' \in D$, we have

$$\big| \mathcal{P}[x] - \mathcal{P}[x'] \big| \le \frac{\beta^2}{2} \int_0^1 |x(q) - x'(q)|\, \mathrm{d}q. \tag{25}$$

We refer to [Gue03] for a proof of this statement. It is therefore sufficient to prove $\phi_n^{\rm av}(\beta,B) \le \mathcal{P}[x]$ for $x$ in a dense subset of $D$.

The proof proceeds by considering the set $\cup_{K \ge 1} D_K$, where $D_K$ is the subset of non-decreasing, right-continuous functions $x : [0,1] \to [0,1]$ which take at most $K+1$ distinct values in $[0,1)$ (plus, eventually, $x(1) = 1$). A function $x \in D_K$ is parameterized by two vectors $0 \le q_0 \le q_1 \le \cdots \le q_K \le q_{K+1} = 1$ and $0 = m_0 \le m_1 \le \cdots \le m_K \le m_{K+1} = 1$, by letting, for $q \in [0,1)$,

$$x(q) = \sum_{i=1}^{K+1} m_i\, \mathbb{I}\big\{ q \in [q_{i-1}, q_i) \big\}. \tag{26}$$

(The notation here is slightly different from [Gue03].) Of particular interest is the case $K = 0$ (replica symmetric, RS), where

$$x(q) = \begin{cases} 0 & \text{if } 0 \le q < q_0, \\ 1 & \text{if } q_0 \le q \le 1. \end{cases} \tag{27}$$

Also important in the analysis of other models is the case $K = 1$ (one-step replica symmetry breaking, 1RSB):

$$x(q) = \begin{cases} 0 & \text{if } 0 \le q < q_0, \\ m_1 & \text{if } q_0 \le q < q_1, \\ 1 & \text{if } q_1 \le q \le 1. \end{cases} \tag{28}$$

The function $x(q)$ actually achieving $\inf_x \mathcal{P}[x]$ has the interpretation of the cumulative distribution function of the overlap $Q_{x,y}$ for two configurations drawn independently from the random measure, i.e. from $\mu_J \otimes \mu_J$. The partial differential equation (7) is easily solved for $x \in D_K$, yielding a more explicit expression for $\mathcal{P}[x] = \mathcal{P}_K(m, q)$. We get

$$\mathcal{P}_K(m, q) = \log 2 + \mathbb{E} \log Y_0 - \frac{\beta^2}{4} \sum_{i=1}^{K+1} m_i \big( q_i^2 - q_{i-1}^2 \big), \tag{29}$$

where $Y_0$ is the random variable constructed as follows. Define, for $X_0, \dots, X_{K+1}$ i.i.d. $N(0,1)$,

$$Y_{K+1} = \cosh\Big( B + \beta \sum_{l=0}^{K+1} \sqrt{q_l - q_{l-1}}\, X_l \Big), \tag{30}$$

where it is understood that $q_{-1} = 0$. Then, recursively, for $l = K, K-1, \dots, 0$,

$$Y_l = \big\{ \mathbb{E}_{l+1}\big( Y_{l+1}^{m_{l+1}} \big) \big\}^{1/m_{l+1}}, \tag{31}$$

where $\mathbb{E}_{l+1}$ denotes expectation with respect to $X_{l+1}$. In particular, in the replica symmetric case ($K = 0$) we have the following function of the only remaining parameter $q_0$:

$$\mathcal{P}_0(q_0) = \log 2 + \mathbb{E} \log \cosh\big( B + \beta \sqrt{q_0}\, X \big) + \frac{\beta^2}{4} (1 - q_0)^2. \tag{32}$$

We now need to prove that, for any $K$, and for any choice of the parameters $\{m_i\}$, $\{q_i\}$, we have $\phi_n^{\rm av} \le \mathcal{P}_K(m, q)$. This is done by interpolation. Define, for $t \in [0,1]$,

$$H_t(x) = -\beta \sqrt{t} \sum_{i,j} J_{ij} x_i x_j - B \sum_i x_i - \beta \sqrt{1-t} \sum_{l=0}^{K+1} \sqrt{q_l - q_{l-1}} \sum_{i=1}^n G^l_i x_i, \tag{33}$$

with the $\{G^l_i\}$ i.i.d. $N(0,1)$ random variables. We then construct the partition functions $\{Z_l(t)\}_{0 \le l \le K+1}$ by letting

$$Z_{K+1}(t) \equiv \sum_x e^{-H_t(x)}, \tag{34}$$

and then recursively, for $l \in \{0, \dots, K\}$,

$$Z_l(t) \equiv \big\{ \mathbb{E}_{l+1}\big( Z_{l+1}(t)^{m_{l+1}} \big) \big\}^{1/m_{l+1}}, \tag{35}$$

where $\mathbb{E}_{l+1}$ denotes expectation with respect to $\{G^{l+1}_i\}_{1 \le i \le n}$. Finally, we define

$$\Phi(t) \equiv \frac{1}{n}\, \mathbb{E}_0 \log Z_0(t). \tag{36}$$

Here $\mathbb{E}_0$ is expectation with respect to both $\{G^0_i\}_{1 \le i \le n}$ and $\{J_{ij}\}_{i,j \le n}$. It is easy to see that $\Phi(1) = \phi_n^{\rm av}$, since in this case $H_1(x) = H(x)$ is just the SK energy function. Further, $\Phi(0) = \log 2 + \mathbb{E} \log Y_0$ (with $Y_0$ defined as above), since

$$Z_{K+1}(0) = 2^n \prod_{i=1}^n \Big\{ \cosh\Big( B + \beta \sum_{l=0}^{K+1} \sqrt{q_l - q_{l-1}}\, G^l_i \Big) \Big\} \tag{37}$$

is the product of $n$ i.i.d. copies of $2\, Y_{K+1}$. The proof is completed by evaluating the derivative of $\Phi(t)$ with respect to $t$. This takes the form

$$\Phi'(t) = -\frac{\beta^2}{4} \sum_{l=1}^{K+1} m_l \big( q_l^2 - q_{l-1}^2 \big) - \frac{\beta^2}{4} \sum_{l=0}^{K} ( m_{l+1} - m_l )\, \big\langle ( Q(x,y) - q_l )^2 \big\rangle_l, \tag{38}$$

where $\langle\, \cdot\, \rangle_l$ denotes expectation with respect to an appropriate measure on $(x, y) \in \{+1,-1\}^n \times \{+1,-1\}^n$. Since each term of the second sum is the expectation of a perfect square with non-negative weight, it follows that

$$\Phi(1) - \Phi(0) \le -\frac{\beta^2}{4} \sum_{l=1}^{K+1} m_l \big( q_l^2 - q_{l-1}^2 \big), \tag{39}$$

which is our claim. Computing the derivative (38) is not particularly difficult, just a bit laborious. For the sake of simplicity, we will consider the case $K = 0$ (replica symmetric). In that case, applying the definitions, we get

$$\Phi(t) = \frac{\beta^2}{2} (1-t)(1-q_0) + \frac{1}{n}\, \mathbb{E} \log \Big\{ \sum_x e^{-\hat H_t(x)} \Big\}, \tag{40}$$

$$\hat H_t(x) = -\beta \sqrt{t} \sum_{i,j} J_{ij} x_i x_j - B \sum_i x_i - \beta \sqrt{(1-t)\, q_0} \sum_i G_i x_i. \tag{41}$$

We then have

$$\Phi'(t) = -\frac{\beta^2}{2} (1-q_0) + \frac{\beta}{2 n \sqrt{t}} \sum_{i,j} \mathbb{E}\{ J_{ij}\, \mu_t(x_i x_j) \} - \frac{\beta \sqrt{q_0}}{2 n \sqrt{1-t}} \sum_i \mathbb{E}\{ G_i\, \mu_t(x_i) \}, \tag{42}$$

where $\mu_t$ is the probability measure $\mu_t(x) \propto \exp\{ -\hat H_t(x) \}$, and $\mu_t(x_i x_j)$, $\mu_t(x_i)$ denote the expectations of $x_i x_j$ and $x_i$ under $\mu_t$. Using Stein's lemma (Gaussian integration by parts), we get

$$\Phi'(t) = -\frac{\beta^2}{2} (1-q_0) + \frac{\beta^2}{4 n^2} \sum_{i,j} \mathbb{E}\big\{ \mu_t(x_i^2 x_j^2) - \mu_t(x_i x_j)^2 \big\} - \frac{\beta^2 q_0}{2 n} \sum_i \mathbb{E}\big\{ \mu_t(x_i^2) - \mu_t(x_i)^2 \big\}$$
$$= -\frac{\beta^2}{4} - \frac{\beta^2}{4}\, \mathbb{E}\big\{ \mu^{(2)}_t( Q(x,y)^2 ) \big\} + \frac{\beta^2 q_0}{2}\, \mathbb{E}\big\{ \mu^{(2)}_t( Q(x,y) ) \big\},$$

where $\mu^{(2)}_t = \mu_t \otimes \mu_t$ is the product distribution over $x, y$, and we used $\mu_t(x_i x_j)^2 = \mu^{(2)}_t(x_i y_i x_j y_j)$ together with $x_i^2 = 1$. By completing the square, we get

$$\Phi'(t) = -\frac{\beta^2}{4} \big( 1 - q_0^2 \big) - \frac{\beta^2}{4}\, \mathbb{E}\big\{ \mu^{(2)}_t\big( [ Q(x,y) - q_0 ]^2 \big) \big\},$$

which indeed coincides with Eq. (38) for $K = 0$. $\square$

References

[GT02] Francesco Guerra and Fabio L. Toninelli. The thermodynamic limit in mean field spin glass models. Commun. Math. Phys., 230:71-79, 2002.

[Gue03] Francesco Guerra. Broken replica symmetry bounds in the mean field spin glass model. Commun. Math. Phys., 233:1-12, 2003.
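As a final numerical complement (not part of the original notes), the replica-symmetric bound $\mathcal{P}_0(q_0)$ of Eq. (32) can be evaluated by Gauss-Hermite quadrature and minimized over a grid of $q_0$; by Theorem 5 with $K = 0$, the resulting value upper-bounds $\phi_n^{\rm av}$ for every $n$. The helper `P0` and the parameter values are illustrative choices.

```python
# Sketch: evaluate the RS bound
#   P_0(q0) = log 2 + E log cosh(B + beta*sqrt(q0)*X) + (beta^2/4)(1-q0)^2,
# with X ~ N(0,1), on a grid of q0, using Gauss-Hermite quadrature for the
# Gaussian expectation.
import numpy as np

def P0(q0, beta, B, deg=80):
    # Nodes/weights for the probabilists' weight exp(-x^2/2); dividing by
    # sqrt(2*pi) turns the quadrature sum into an expectation over N(0,1).
    x, w = np.polynomial.hermite_e.hermegauss(deg)
    E = np.dot(w, np.log(np.cosh(B + beta * np.sqrt(q0) * x))) / np.sqrt(2 * np.pi)
    return np.log(2) + E + 0.25 * beta**2 * (1 - q0)**2

beta, B = 1.0, 0.1
grid = np.linspace(0.0, 0.99, 100)
vals = np.array([P0(q, beta, B) for q in grid])
q_star, p_star = grid[vals.argmin()], vals.min()
print(f"RS bound: min P0 = {p_star:.4f} at q0 = {q_star:.3f}")
```

The grid minimizer approximates the stationarity condition $q_0 = \mathbb{E} \tanh^2(B + \beta \sqrt{q_0} X)$; comparing the resulting bound with the brute-force estimate of $\phi_n^{\rm av}$ from the enumeration sketch above makes Theorem 5 tangible at small $n$.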


More information

Ferromagnetic Ising models

Ferromagnetic Ising models Stat 36 Stochastic Processes on Graphs Ferromagnetic Ising models Amir Dembo Lecture 3-4 - 0/5/2007 Introduction By ferromagnetic Ising models we mean the study of the large-n behavior of the measures

More information

Lecture 14 February 28

Lecture 14 February 28 EE/Stats 376A: Information Theory Winter 07 Lecture 4 February 8 Lecturer: David Tse Scribe: Sagnik M, Vivek B 4 Outline Gaussian channel and capacity Information measures for continuous random variables

More information

Vector measures of bounded γ-variation and stochastic integrals

Vector measures of bounded γ-variation and stochastic integrals Vector measures of bounded γ-variation and stochastic integrals Jan van Neerven and Lutz Weis bstract. We introduce the class of vector measures of bounded γ-variation and study its relationship with vector-valued

More information

p D q EA Figure 3: F as a function of p d plotted for p d ' q EA, t = 0:05 and y =?0:01.

p D q EA Figure 3: F as a function of p d plotted for p d ' q EA, t = 0:05 and y =?0:01. F p D q EA Figure 3: F as a function of p d plotted for p d ' q EA, t = :5 and y =?:. 5 F p A Figure : F as a function of p d plotted for < p d < 2p A, t = :5 and y =?:. F p A p max p p p D q EA Figure

More information

Proofs for Large Sample Properties of Generalized Method of Moments Estimators

Proofs for Large Sample Properties of Generalized Method of Moments Estimators Proofs for Large Sample Properties of Generalized Method of Moments Estimators Lars Peter Hansen University of Chicago March 8, 2012 1 Introduction Econometrica did not publish many of the proofs in my

More information

Convergence of Feller Processes

Convergence of Feller Processes Chapter 15 Convergence of Feller Processes This chapter looks at the convergence of sequences of Feller processes to a iting process. Section 15.1 lays some ground work concerning weak convergence of processes

More information

PERSPECTIVES ON SPIN GLASSES

PERSPECTIVES ON SPIN GLASSES PERSPECTIVES ON SPIN GLASSES Presenting and developing the theory of spin glasses as a prototype for complex systems, this book is a rigorous and up-to-date introduction to their properties. The book combines

More information

Estimates for probabilities of independent events and infinite series

Estimates for probabilities of independent events and infinite series Estimates for probabilities of independent events and infinite series Jürgen Grahl and Shahar evo September 9, 06 arxiv:609.0894v [math.pr] 8 Sep 06 Abstract This paper deals with finite or infinite sequences

More information

The Sherrington-Kirkpatrick model: an overview

The Sherrington-Kirkpatrick model: an overview The Sherrington-Kirkpatrick model: an overview Dmitry Panchenko arxiv:1211.1094v1 [math-ph] 6 ov 2012 ovember 7, 2012 Abstract The goal of this paper is to review some of the main ideas that emerged from

More information

I forgot to mention last time: in the Ito formula for two standard processes, putting

I forgot to mention last time: in the Ito formula for two standard processes, putting I forgot to mention last time: in the Ito formula for two standard processes, putting dx t = a t dt + b t db t dy t = α t dt + β t db t, and taking f(x, y = xy, one has f x = y, f y = x, and f xx = f yy

More information

Order-parameter fluctuations (OPF) in spin glasses: Monte Carlo simulations and exact results for small sizes

Order-parameter fluctuations (OPF) in spin glasses: Monte Carlo simulations and exact results for small sizes Eur. Phys. J. B 19, 565 582 (21) HE EUROPEAN PHYSICAL JOURNAL B c EDP Sciences Società Italiana di Fisica Springer-Verlag 21 Order-parameter fluctuations (OPF) in spin glasses: Monte Carlo simulations

More information

Advanced Combinatorial Optimization Feb 13 and 25, Lectures 4 and 6

Advanced Combinatorial Optimization Feb 13 and 25, Lectures 4 and 6 18.438 Advanced Combinatorial Optimization Feb 13 and 25, 2014 Lectures 4 and 6 Lecturer: Michel X. Goemans Scribe: Zhenyu Liao and Michel X. Goemans Today, we will use an algebraic approach to solve the

More information

Random Fermionic Systems

Random Fermionic Systems Random Fermionic Systems Fabio Cunden Anna Maltsev Francesco Mezzadri University of Bristol December 9, 2016 Maltsev (University of Bristol) Random Fermionic Systems December 9, 2016 1 / 27 Background

More information

LECTURE 2 NOTES. 1. Minimal sufficient statistics.

LECTURE 2 NOTES. 1. Minimal sufficient statistics. LECTURE 2 NOTES 1. Minimal sufficient statistics. Definition 1.1 (minimal sufficient statistic). A statistic t := φ(x) is a minimial sufficient statistic for the parametric model F if 1. it is sufficient.

More information

Chapter One. The Calderón-Zygmund Theory I: Ellipticity

Chapter One. The Calderón-Zygmund Theory I: Ellipticity Chapter One The Calderón-Zygmund Theory I: Ellipticity Our story begins with a classical situation: convolution with homogeneous, Calderón- Zygmund ( kernels on R n. Let S n 1 R n denote the unit sphere

More information

The Replica Approach: Spin and Structural Glasses

The Replica Approach: Spin and Structural Glasses The Replica Approach: Spin and Structural Glasses Daniel Sussman December 18, 2008 Abstract A spin glass is a type of magnetic ordering that can emerge, most transparently, when a system is both frustrated

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

Through Kaleidoscope Eyes Spin Glasses Experimental Results and Theoretical Concepts

Through Kaleidoscope Eyes Spin Glasses Experimental Results and Theoretical Concepts Through Kaleidoscope Eyes Spin Glasses Experimental Results and Theoretical Concepts Benjamin Hsu December 11, 2007 Abstract A spin glass describes a system of spins on a lattice (or a crystal) where the

More information

1 Glivenko-Cantelli type theorems

1 Glivenko-Cantelli type theorems STA79 Lecture Spring Semester Glivenko-Cantelli type theorems Given i.i.d. observations X,..., X n with unknown distribution function F (t, consider the empirical (sample CDF ˆF n (t = I [Xi t]. n Then

More information

Random Bernstein-Markov factors

Random Bernstein-Markov factors Random Bernstein-Markov factors Igor Pritsker and Koushik Ramachandran October 20, 208 Abstract For a polynomial P n of degree n, Bernstein s inequality states that P n n P n for all L p norms on the unit

More information

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA)

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA) The circular law Lewis Memorial Lecture / DIMACS minicourse March 19, 2008 Terence Tao (UCLA) 1 Eigenvalue distributions Let M = (a ij ) 1 i n;1 j n be a square matrix. Then one has n (generalised) eigenvalues

More information

ECE 4400:693 - Information Theory

ECE 4400:693 - Information Theory ECE 4400:693 - Information Theory Dr. Nghi Tran Lecture 8: Differential Entropy Dr. Nghi Tran (ECE-University of Akron) ECE 4400:693 Lecture 1 / 43 Outline 1 Review: Entropy of discrete RVs 2 Differential

More information

1 Definition of the Riemann integral

1 Definition of the Riemann integral MAT337H1, Introduction to Real Analysis: notes on Riemann integration 1 Definition of the Riemann integral Definition 1.1. Let [a, b] R be a closed interval. A partition P of [a, b] is a finite set of

More information

Statistical Physics on Sparse Random Graphs: Mathematical Perspective

Statistical Physics on Sparse Random Graphs: Mathematical Perspective Statistical Physics on Sparse Random Graphs: Mathematical Perspective Amir Dembo Stanford University Northwestern, July 19, 2016 x 5 x 6 Factor model [DM10, Eqn. (1.4)] x 1 x 2 x 3 x 4 x 9 x8 x 7 x 10

More information

Fluctuations of the free energy of spherical spin glass

Fluctuations of the free energy of spherical spin glass Fluctuations of the free energy of spherical spin glass Jinho Baik University of Michigan 2015 May, IMS, Singapore Joint work with Ji Oon Lee (KAIST, Korea) Random matrix theory Real N N Wigner matrix

More information

Information, Physics, and Computation

Information, Physics, and Computation Information, Physics, and Computation Marc Mezard Laboratoire de Physique Thdorique et Moales Statistiques, CNRS, and Universit y Paris Sud Andrea Montanari Department of Electrical Engineering and Department

More information

Hands-On Learning Theory Fall 2016, Lecture 3

Hands-On Learning Theory Fall 2016, Lecture 3 Hands-On Learning Theory Fall 016, Lecture 3 Jean Honorio jhonorio@purdue.edu 1 Information Theory First, we provide some information theory background. Definition 3.1 (Entropy). The entropy of a discrete

More information

ECE 275B Homework # 1 Solutions Version Winter 2015

ECE 275B Homework # 1 Solutions Version Winter 2015 ECE 275B Homework # 1 Solutions Version Winter 2015 1. (a) Because x i are assumed to be independent realizations of a continuous random variable, it is almost surely (a.s.) 1 the case that x 1 < x 2

More information

Lecture 7 Introduction to Statistical Decision Theory

Lecture 7 Introduction to Statistical Decision Theory Lecture 7 Introduction to Statistical Decision Theory I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw December 20, 2016 1 / 55 I-Hsiang Wang IT Lecture 7

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem

LECTURE 15. Last time: Feedback channel: setting up the problem. Lecture outline. Joint source and channel coding theorem LECTURE 15 Last time: Feedback channel: setting up the problem Perfect feedback Feedback capacity Data compression Lecture outline Joint source and channel coding theorem Converse Robustness Brain teaser

More information

ON A NONHIERARCHICAL VERSION OF THE GENERALIZED RANDOM ENERGY MODEL 1. BY ERWIN BOLTHAUSEN AND NICOLA KISTLER Universität Zürich

ON A NONHIERARCHICAL VERSION OF THE GENERALIZED RANDOM ENERGY MODEL 1. BY ERWIN BOLTHAUSEN AND NICOLA KISTLER Universität Zürich The Annals of Applied Probability 2006, Vol. 16, No. 1, 1 14 DOI: 10.1214/105051605000000665 Institute of Mathematical Statistics, 2006 ON A NONHIERARCHICAL VERSION OF THE GENERALIZED RANDOM ENERGY MODEL

More information

On the convergence of sequences of random variables: A primer

On the convergence of sequences of random variables: A primer BCAM May 2012 1 On the convergence of sequences of random variables: A primer Armand M. Makowski ECE & ISR/HyNet University of Maryland at College Park armand@isr.umd.edu BCAM May 2012 2 A sequence a :

More information

CS229T/STATS231: Statistical Learning Theory. Lecturer: Tengyu Ma Lecture 11 Scribe: Jongho Kim, Jamie Kang October 29th, 2018

CS229T/STATS231: Statistical Learning Theory. Lecturer: Tengyu Ma Lecture 11 Scribe: Jongho Kim, Jamie Kang October 29th, 2018 CS229T/STATS231: Statistical Learning Theory Lecturer: Tengyu Ma Lecture 11 Scribe: Jongho Kim, Jamie Kang October 29th, 2018 1 Overview This lecture mainly covers Recall the statistical theory of GANs

More information

Spectral Continuity Properties of Graph Laplacians

Spectral Continuity Properties of Graph Laplacians Spectral Continuity Properties of Graph Laplacians David Jekel May 24, 2017 Overview Spectral invariants of the graph Laplacian depend continuously on the graph. We consider triples (G, x, T ), where G

More information

Lecture 2: Convergence of Random Variables

Lecture 2: Convergence of Random Variables Lecture 2: Convergence of Random Variables Hyang-Won Lee Dept. of Internet & Multimedia Eng. Konkuk University Lecture 2 Introduction to Stochastic Processes, Fall 2013 1 / 9 Convergence of Random Variables

More information