Invariances in spectral estimates
Franck Barthe, Dario Cordero-Erausquin
Paris-Est Marne-la-Vallée, January 2011

Notation

Given a probability measure $\nu$ on some Euclidean space, the Poincaré constant $c_P(\nu)$ is the best constant such that:
\[
\forall g\in L^2(\nu),\qquad \mathrm{Var}_\nu(g):=\int\Big(g-\int g\,d\nu\Big)^2 d\nu \;\le\; c_P(\nu)\int |\nabla g|^2\,d\nu.
\]
For a random vector $X\sim\mu$ we write $c_P(X)=c_P(\mu)$, i.e. we ask $\mathrm{Var}\big(g(X)\big)\le c_P(X)\,\mathbb{E}\big[|\nabla g(X)|^2\big]$.

For a probability measure $\mu$ on $\mathbb{R}^n$ with density $e^{-V}$, or for a random vector $X\sim\mu$, we define the group of isometries leaving $\mu$ (or $X$) invariant as
\[
O_n(\mu)=O_n(X):=\{R\in O_n;\ RX\sim X\}=\{R\in O_n;\ V\circ R=V\}.
\]
Furthermore, for any isometry $R\in O_n$ we set $\mathrm{Fix}(R):=\{x\in\mathbb{R}^n;\ Rx=x\}$.
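As a quick sanity check (a worked example not on the original slides), the definition can be tested on the standard Gaussian, whose Poincaré constant is classical:
\[
d\gamma_n(x)=(2\pi)^{-n/2}e^{-|x|^2/2}\,dx,\qquad c_P(\gamma_n)=1,
\]
i.e. $\mathrm{Var}_{\gamma_n}(g)\le\int|\nabla g|^2\,d\gamma_n$ for every $g\in L^2(\gamma_n)$, with equality for linear $g$. Note also the scaling $c_P(\lambda X)=\lambda^2\,c_P(X)$, so $c_P$ behaves like a squared length.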

Open problems for log-concave distributions

We say $X\sim\mu$ is a log-concave distribution on $\mathbb{R}^n$ if $d\mu=e^{-V}\,dx$ with $V$ convex on $\mathbb{R}^n$. It is isotropic if $\mathrm{cov}(X)=\mathrm{Id}$.

KLS conjecture: $\displaystyle\sup_n\ \sup_{\mu\ \mathrm{isot.}} c_P(\mu)<+\infty$.

Variance conjecture: $\displaystyle\sup_n\ \sup_{X}\ \frac{\mathrm{Var}\,|X|^2}{\mathbb{E}\,|X|^2}<+\infty$ (the suprema being over isotropic log-concave distributions).

These problems are hard, but partial (dimension-dependent) results are known, connected to deep facts about the distribution of mass for log-concave measures and convex bodies in high dimension.

Klartag proved that the variance conjecture holds for the class of unconditional measures.
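One implication worth making explicit (a standard observation, added here for completeness): the KLS conjecture implies the variance conjecture, simply by applying the Poincaré inequality to $g(x)=|x|^2$:
\[
\mathrm{Var}_\mu\big(|x|^2\big)\;\le\;c_P(\mu)\int\big|\nabla|x|^2\big|^2\,d\mu
\;=\;4\,c_P(\mu)\int|x|^2\,d\mu\;=\;4\,c_P(\mu)\,\mathbb{E}|X|^2,
\]
so a uniform bound on $c_P(\mu)$ over isotropic log-concave measures gives a uniform bound on $\mathrm{Var}|X|^2/\mathbb{E}|X|^2$.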

Hörmander's $L^2$-method

Lemma (Bochner, Hörmander, etc.). Let $V$ be a $C^2$-smooth function on $\mathbb{R}^n$ with $Z_V:=\int e^{-V}<+\infty$, and introduce $d\mu(x)=\frac{e^{-V(x)}\,dx}{Z_V}$. Then, for $C>0$, the following assertions are equivalent:
1. For every function $f\in L^2(\mu)$, we have $\displaystyle \mathrm{Var}_\mu(f)\le C\int|\nabla f|^2\,d\mu$.
2. For every (smooth enough) function $u\in L^2(\mu)$, we have
\[
\frac1C\int|\nabla u|^2\,d\mu\;\le\;\int D^2V(x)\,\nabla u(x)\cdot\nabla u(x)\,d\mu(x)+\int\|D^2u(x)\|^2\,d\mu(x).
\]

Sketch of the proof. Introduce the differential operator $Lu:=\Delta u-\nabla V\cdot\nabla u$. Then $L$ is negative and self-adjoint on $L^2(\mu)$, with $\int u\,Lv\,d\mu=-\int\nabla u\cdot\nabla v\,d\mu$, and $\ker(L)=\{\text{constant functions}\}$. Crucial integration by parts formula: for all (smooth) $u$,
\[
\int(Lu)^2\,d\mu=\int D^2V(x)\,\nabla u(x)\cdot\nabla u(x)\,d\mu(x)+\int\|D^2u(x)\|^2\,d\mu(x).
\]
So the right-hand side in 2) is exactly $\int(Lu)^2\,d\mu$.

Proof of 2) $\Rightarrow$ 1). For $f\in L^2(\mu)$ with $\int f\,d\mu=0$, introduce $u\in L^2(\mu)$ such that $f=Lu$ and use
\[
\int f^2\,d\mu=\int f\,Lu\,d\mu=-\int\nabla f\cdot\nabla u\,d\mu\le\Big(\int|\nabla f|^2\,d\mu\Big)^{1/2}\Big(\int|\nabla u|^2\,d\mu\Big)^{1/2},
\]
while 2) gives $\int|\nabla u|^2\,d\mu\le C\int(Lu)^2\,d\mu=C\int f^2\,d\mu$. Combining the two yields $\int f^2\,d\mu\le C\int|\nabla f|^2\,d\mu$.
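A worked special case (standard, not spelled out on these slides): if $V$ is uniformly convex, assertion 2 is immediate and the lemma recovers the classical Bakry-Émery / Brascamp-Lieb bound. Indeed,
\[
D^2V\ge\rho\,\mathrm{Id}\ \ (\rho>0)\ \Longrightarrow\ \int D^2V\,\nabla u\cdot\nabla u\,d\mu+\int\|D^2u\|^2\,d\mu\;\ge\;\rho\int|\nabla u|^2\,d\mu,
\]
so assertion 2 holds with $C=1/\rho$ and therefore $c_P(\mu)\le 1/\rho$. For the standard Gaussian ($V(x)=|x|^2/2$, $\rho=1$) this gives back $c_P(\gamma_n)\le1$.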


Formally, for the (negative) operator $L=\Delta-\nabla V\cdot\nabla$ on $L^2(\mu)$: assertion 1 expresses that $-L\ge\frac1C$ (on the orthogonal complement of the constants), while assertion 2 expresses that $(-L)^2\ge\frac1C(-L)$.

The equivalence also holds for classes of $O_n(\mu)$-invariant functions, because $L$ commutes with such isometries. More precisely, for $R\in O_n(\mu)$ (i.e. if $V$ is $R$-invariant), we have the equivalence:
1. For every $R$-invariant function $f\in L^2(\mu)$, we have ...
2. For every $R$-invariant function $u\in L^2(\mu)$, we have ...
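The commutation used above can be checked directly; a short verification (added here, under the slide's assumption $V\circ R=V$ with $R\in O_n$):
\[
\nabla(u\circ R)(x)=R^{\!\top}\nabla u(Rx),\qquad \Delta(u\circ R)(x)=\Delta u(Rx),
\]
and differentiating $V(Rx)=V(x)$ gives $\nabla V(Rx)=R\,\nabla V(x)$, hence
\[
L(u\circ R)(x)=\Delta u(Rx)-\nabla V(x)\cdot R^{\!\top}\nabla u(Rx)=\Delta u(Rx)-\nabla V(Rx)\cdot\nabla u(Rx)=(Lu)(Rx).
\]
In particular, if $f=Lu$ is $R$-invariant then $L(u\circ R)=f\circ R=f=Lu$, so $u\circ R-u$ is constant; since both functions have the same $\mu$-mean, $u\circ R=u$, i.e. $u$ inherits the invariance of $f$.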


Invariances

We write $P_F$ for the orthogonal projection onto a subspace $F$. We will consider distributions $\mu$ on $\mathbb{R}^n$ satisfying (HYP):

(HYP) There exist $R_1,\dots,R_m\in O_n(\mu)$ and $c_1,\dots,c_m>0$ such that, setting $E_i:=\mathrm{Fix}(R_i)^{\perp}$, we have $\displaystyle\sum_{i=1}^m c_i\,P_{E_i}=\mathrm{Id}$, i.e.
\[
\sum_{i=1}^m c_i\,|P_{E_i}v|^2=|v|^2\qquad\forall v\in\mathbb{R}^n.
\]
This covers a large class of distributions.

Consider the case of hyperplane symmetries (= reflections): $S_u(x)=x-2(x\cdot u)\,u$. Note that $\mathrm{Fix}(S_u)^{\perp}=\mathbb{R}u$.

(HYP') There exist reflections $S_{u_1},\dots,S_{u_m}\in O_n(\mu)$ such that $\bigcap_j\mathrm{Fix}(S_{u_j})=\{0\}$.

Then we have: (HYP') $\Longrightarrow$ (HYP).

Examples:
- If $\mu$ is unconditional: take $S_{e_1},\dots,S_{e_n}\in O_n(\mu)$ and note that $\sum_i e_i\otimes e_i=\mathrm{Id}$.
- If $\mu$ has the invariances of the simplex $\Delta_n$, i.e. $\mu$ is invariant under permutations of the coordinates (note that $O_n(\Delta_n)=S_n$), then (HYP') and (HYP) hold. Setting $u_{ij}=\frac{e_i-e_j}{\sqrt2}$ for $i\ne j$, we have $S_{u_{ij}}\in O_n(\mu)$ and $\frac1{n}\sum_{i\ne j}u_{ij}\otimes u_{ij}=\mathrm{Id}$ on the hyperplane supporting $\mu$ (see the computation below).
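The decomposition of the identity behind the simplex example can be checked by a direct computation (added here; the slide only states the result). Over ordered pairs $i\ne j$,
\[
\sum_{i\ne j}u_{ij}\otimes u_{ij}=\frac12\sum_{i\ne j}(e_i-e_j)\otimes(e_i-e_j)=n\,\mathrm{Id}-J,
\]
where $J=\mathbf{1}\otimes\mathbf{1}$ is the all-ones matrix. On the hyperplane $\{\sum_i x_i=\rho\}$, whose direction is $\mathbf1^{\perp}$ and where $J$ vanishes, this equals $n\,\mathrm{Id}$, so $\frac1n\sum_{i\ne j}u_{ij}\otimes u_{ij}$ is indeed the identity of that hyperplane.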

Restrictions

For a subspace $F\subset\mathbb{R}^n$ and $E=F^{\perp}$, and $\mu$ a measure on $\mathbb{R}^n$ with density $e^{-V}$, define the measure $\mu_{x,E}$ to be the normalized restriction of $\mu$ to $x+E=P_Fx+E$, i.e. the probability measure on $E$ given by
\[
d\mu_{x,E}(y):=\frac{e^{-V(y+P_Fx)}\,dy}{\int_E e^{-V(z+P_Fx)}\,dz},\qquad y\in E.
\]
In other words, if $X\sim\mu$, then the conditional law $\mathcal{L}\big(X\mid P_FX=P_Fx\big)$ is $\mu_{x,E}$ (viewed on the fiber $P_Fx+E$).

Theorem. Let $X\sim\mu$ be a log-concave probability measure on $\mathbb{R}^n$ verifying (HYP). Then, for every smooth function $f:\mathbb{R}^n\to\mathbb{R}$ such that $f\circ R_i=f$ for all $i\le m$, we have
\[
\mathrm{Var}_\mu(f)\;\le\;\mathbb{E}\Big[\sum_{i=1}^m c_P\big(X\mid P_{E_i^{\perp}}X\big)\,c_i\,\big|P_{E_i}\nabla f(X)\big|^2\Big]
\;=\;\int\sum_{i=1}^m c_P(\mu_{x,E_i})\,c_i\,\big|P_{E_i}\nabla f(x)\big|^2\,d\mu(x).
\]
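For orientation (an instantiation added here, not a separate statement on the slides): in the unconditional case, with $R_i=S_{e_i}$, $E_i=\mathbb{R}e_i$ and $c_i=1$, the theorem reads, for unconditional $f$,
\[
\mathrm{Var}_\mu(f)\;\le\;\sum_{i=1}^n\int c_P(\mu_{x,\mathbb{R}e_i})\,\big(\partial_i f(x)\big)^2\,d\mu(x),
\]
where $\mu_{x,\mathbb{R}e_i}$ is the one-dimensional (log-concave, even) conditional law of the $i$-th coordinate given the others.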

Ideas of the argument

We need to understand how the $L^2$ argument combines with the invariances. This is done by using appropriate restrictions and Fubini. Let us explain the argument in the case of invariance under a single reflection $S_\theta$.

If $f$ is $S_\theta$-invariant, then so is the $u$ such that $f=Lu$. But this means that $\partial_\theta u$ is odd in the direction $\theta$ (see the verification below). So on every line in the direction $\theta$ we have $\mathbb{E}\big[(\partial_\theta u)(X)\mid P_{\theta^{\perp}}X\big]=0$ and therefore
\[
\mathbb{E}\big[(\partial_\theta u)^2(X)\mid P_{\theta^{\perp}}X\big]\;\le\;c_P\big(X\mid P_{\theta^{\perp}}X\big)\,\mathbb{E}\big[(\partial^2_{\theta\theta}u)^2(X)\mid P_{\theta^{\perp}}X\big].
\]
Therefore we have
\[
\mathbb{E}\Big[\frac{1}{c_P(X\mid P_{\theta^{\perp}}X)}\,(\partial_\theta u)^2(X)\Big]\;\le\;\mathbb{E}\big[(\partial^2_{\theta\theta}u)^2(X)\big].
\]
Putting all the invariances together, and using the decomposition of the identity to decompose the norms, we get, because of log-concavity ($D^2V\ge0$):
\[
\int(Lu)^2\,d\mu\;\ge\;\int\|D^2u\|^2\,d\mu\;\ge\;\sum_i c_i\int\|P_{E_i}D^2u\,P_{E_i}\|^2\,d\mu\;\ge\;\sum_i c_i\int\frac{1}{c_P(\mu_{x,E_i})}\,\big|P_{E_i}\nabla u(x)\big|^2\,d\mu(x).
\]
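The oddness claim can be verified in one line (a detail added here): differentiating the invariance $u(S_\theta x)=u(x)$ in the direction $\theta$ and using $S_\theta\theta=-\theta$ gives
\[
\nabla u(S_\theta x)\cdot S_\theta\theta=\nabla u(x)\cdot\theta,\qquad\text{i.e.}\qquad (\partial_\theta u)(S_\theta x)=-(\partial_\theta u)(x).
\]
Since the conditional law of $X$ on each line $x+\mathbb{R}\theta$ is symmetric under $S_\theta$ (as $\mu$ is $S_\theta$-invariant), the odd function $\partial_\theta u$ has zero conditional mean, which is what allows the Poincaré inequality of the fiber to be applied to it.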

Consequences

Recall the result: for $f:\mathbb{R}^n\to\mathbb{R}$ such that $f\circ R_i=f$ for all $i\le m$, we have
\[
\mathrm{Var}_\mu(f)\;\le\;\int\sum_{i=1}^m c_P(\mu_{x,E_i})\,c_i\,\big|P_{E_i}\nabla f(x)\big|^2\,d\mu(x).
\]
Variance estimates (i.e. for $f(x)=|x|^2$) for log-concave distributions. We can use a result of Bobkov / KLS to control the spectral gap of the conditioned measures: $c_P(\nu)\le c\int|x|^2\,d\nu(x)$.

Examples of results:
- If $\mu$ is log-concave, isotropic and verifies (HYP), then
\[
\mathrm{Var}_\mu\big(|x|^2\big)\;\le\;C\sum_{i=1}^m c_i\dim(E_i)^2\;\le\;C\,n\,\max_{i\le m}\dim(E_i).
\]
- If $\mu$ is log-concave (isotropic) and satisfies (HYP'), then $\mu$ satisfies the variance conjecture: $\mathrm{Var}_\mu\big(|x|^2\big)\le C\,n$.
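A sketch of how the unconditional case yields the variance conjecture (the slide states the conclusion only; the chain below uses the one-dimensional bound $c_P(\nu)\le c\int t^2\,d\nu$ quoted above and standard reverse Hölder inequalities for log-concave variables): with $f(x)=|x|^2$, $E_i=\mathbb{R}e_i$, $c_i=1$,
\[
\mathrm{Var}_\mu\big(|X|^2\big)\;\le\;4\sum_{i=1}^n\mathbb{E}\Big[c_P\big(X_i\mid X_{\hat\imath}\big)\,X_i^2\Big]
\;\le\;4c\sum_{i=1}^n\mathbb{E}\Big[\mathbb{E}\big[X_i^2\mid X_{\hat\imath}\big]\,X_i^2\Big],
\]
where $X_{\hat\imath}$ denotes the coordinates other than $X_i$ and $c_P(X_i\mid X_{\hat\imath})$ is the Poincaré constant of the one-dimensional conditional. Then Cauchy-Schwarz, Jensen ($\mathbb{E}\big[\mathbb{E}[X_i^2\mid X_{\hat\imath}]^2\big]\le\mathbb{E}[X_i^4]$) and the reverse Hölder inequality $\mathbb{E}[X_i^4]\le C\,(\mathbb{E}[X_i^2])^2$ for log-concave $X_i$ give
\[
\mathrm{Var}_\mu\big(|X|^2\big)\;\le\;4c\sum_{i=1}^n\mathbb{E}[X_i^4]\;\le\;C'\sum_{i=1}^n\big(\mathbb{E}[X_i^2]\big)^2\;=\;C'\,n
\]
in the isotropic case.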

What about the spectral gap itself? For this, we present two ways of proceeding:
- a symmetrization argument extending our inequality to every function $f$; this requires the spectral gap of the Cayley graph given by the group of isometries and a family of generators;
- an analysis of the invariance properties of eigenfunctions.

Examples of results for spectral gap

Theorem. Let $X\sim\mu$ be a log-concave measure and $R_1,\dots,R_m\in O_n(\mu)$ such that $\bigcap_{i\le m}\mathrm{Fix}(R_i)=\{0\}$. Then, setting $E_i=\mathrm{Fix}(R_i)^{\perp}$, we have
\[
c_P(\mu)\;\le\;\max_{i\le m}\ \sup_{x\in\mathrm{Fix}(R_i)}c_P(\mu_{x,E_i})\;\le\;c\,\max_{i\le m}\ \sup\ \mathbb{E}\big[|P_{E_i}X|^2\ \big|\ P_{\mathrm{Fix}(R_i)}X\big].
\]
Then, one can use the deep result of E. Milman on the stability of the spectral gap, and cut off to a bounded convex domain. Problem: we may lose the invariance. We also need to control the number of isometries we use.

Theorem. Under the same assumptions, suppose also that each $R_i$ acts as a permutation on the set $\{E_1,\dots,E_m\}$. Then
\[
c_P(\mu)\;\le\;c\,\log(m)^2\,\max_{i\le m}\dim(E_i).
\]

Theorem. If $S_{u_1},\dots,S_{u_m}\in O_n(\mu)$ are reflections such that $\bigcap_j\mathrm{Fix}(S_{u_j})=\{0\}$, then $c_P(\mu)\le c\,\log(n)^2$.

In the sequel, we will use the following particular case of the result:

Theorem. Let $X\sim\mu$ be a log-concave measure and let $S_{u_1},\dots,S_{u_m}\in O_n(\mu)$ be reflections such that $\bigcap_j\mathrm{Fix}(S_{u_j})=\{0\}$. Then
\[
c_P(\mu)\;\le\;\max_{i\le m}\ \sup_{x\perp u_i}c_P\big(\mu_{x,\mathbb{R}u_i}\big).
\]
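As a consistency check (added here, and relying on the reconstructed form of the bound above): for a product $\mu=\nu^{\otimes n}$ of an even log-concave measure $\nu$ on $\mathbb{R}$, the coordinate reflections $S_{e_1},\dots,S_{e_n}$ satisfy $\bigcap_i\mathrm{Fix}(S_{e_i})=\{0\}$, and every conditional $\mu_{x,\mathbb{R}e_i}$ is a copy of $\nu$, so the last theorem gives
\[
c_P\big(\nu^{\otimes n}\big)\;\le\;\max_i\ \sup_{x\perp e_i}c_P\big(\mu_{x,\mathbb{R}e_i}\big)\;=\;c_P(\nu),
\]
consistent (up to constants) with the tensorization identity $c_P(\nu^{\otimes n})=c_P(\nu)$ recalled on the next slide.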

Conditioned spin system

Here $\mu$ is a probability measure on $\mathbb{R}$ having a spectral gap. Then $c_P(\mu^{\otimes n})=c_P(\mu)$.

For $n\ge1$ and $\rho\in\mathbb{R}$, let $\mu_{n,\rho}$ be the measure $\mu^{\otimes n}$ conditioned (i.e. restricted and renormalized) to the affine hyperplane $\sum_{i=1}^n x_i=\rho$.

Question: is $c_P(\mu_{n,\rho})$ uniformly bounded (in $n$ and $\rho$)? (For many classes of measures it is enough to consider $\mu_{n,0}$.)

What is known? Write $e^{-V}$ for the density of $\mu$ on $\mathbb{R}$.

Easy: if $V''\ge c>0$, then the Bakry-Émery criterion applies to $\mu_{n,\rho}$ (see the computation below).

Much more difficult: assume $\mu$ has density $e^{-\phi-\psi}$.
- Landim-Panizo-Yau (Chafaï, Grunewald-Otto-Reznikoff-Villani): if $\phi(x)=x^2/2$ and $\|\psi\|_\infty<+\infty$, $\|\psi'\|_\infty<+\infty$, then OK: $\sup_{n,\rho}c_P(\mu_{n,\rho})<+\infty$.
- Caputo ('03): if $\phi''\ge c>0$ and $\|\psi\|_\infty<+\infty$, $\|\psi'\|_\infty<+\infty$, AND moreover some further growth condition on $\psi$ holds, then OK: $\sup_{n,\rho}c_P(\mu_{n,\rho})<+\infty$.

This case was recently completely solved by
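The "easy" case can be made explicit (a short verification added here): the density of $\mu_{n,\rho}$ with respect to Lebesgue measure on the hyperplane $H_\rho=\{\sum_i x_i=\rho\}$ is proportional to $e^{-W(x)}$ with $W(x)=\sum_{i=1}^n V(x_i)$, and
\[
D^2W(x)=\mathrm{diag}\big(V''(x_1),\dots,V''(x_n)\big)\;\ge\;c\,\mathrm{Id},
\]
so the restriction of $W$ to the affine hyperplane $H_\rho$ is still uniformly convex with constant $c$, and the Bakry-Émery criterion gives $c_P(\mu_{n,\rho})\le 1/c$, uniformly in $n$ and $\rho$.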

Our results

Theorem. Assume $d\mu=e^{-V}\,dx$ is log-concave. Then
\[
\sup_{n,\rho}c_P(\mu_{n,\rho})\;\le\;c\,\sup_{\rho}c_P(\mu_{2,\rho})\;\le\;c\,\sup_{a\in\mathbb{R}}\Big(\int_{\mathbb{R}}e^{-[V(a+t)+V(a-t)-2V(a)]}\,dt\Big)^{2}.
\]
Example of application: $V(a+t)+V(a-t)-2V(a)\ge c(t)$ with $\int e^{-c(t)}\,dt<+\infty$. For instance: $V(t)=|t|^{\beta}+\psi(t)$ with $\beta\ge2$ and $\psi$ small. The analysis also yields counter-examples for $V(t)=|t|^{\beta}$ with $\beta<2$.

Theorem. Assume $d\mu=e^{-c(x^2)-\psi(x)}\,dx$ with $c$ convex and $\psi$ small. Then $\displaystyle\sup_{n,\rho}c_P(\mu_{n,\rho})<+\infty$.
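One way to check the first instance (a verification added here, not on the slide): for $V(t)=|t|^{\beta}$ with $\beta\ge2$, the two-dimensional norm comparison $\|v\|_\beta\ge 2^{1/\beta-1/2}\|v\|_2$ and the superadditivity of $s\mapsto s^{\beta/2}$ give
\[
|a+t|^{\beta}+|a-t|^{\beta}\;\ge\;2^{1-\beta/2}\big(2a^2+2t^2\big)^{\beta/2}\;\ge\;2\big(|a|^{\beta}+|t|^{\beta}\big),
\]
so $V(a+t)+V(a-t)-2V(a)\ge 2|t|^{\beta}=:c(t)$, and indeed $\int e^{-c(t)}\,dt<+\infty$.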

Argument

The density of $\mu^{\otimes n}$ is $e^{-V(x_1)-V(x_2)-\cdots-V(x_n)}$, so its restriction to $\{\sum_i x_i=\rho\}$ has the symmetries of the simplex. So we know that, setting $u_{i,j}=\frac{e_i-e_j}{\sqrt2}$, we have
\[
c_P(\mu_{n,\rho})\;\le\;c\,\sup_{i\ne j}\ \sup_{z\perp u_{i,j}}c_P\big((\mu_{n,\rho})_{z,\mathbb{R}u_{i,j}}\big).
\]
Fix $z\perp u_{1,2}$ (so that $z_1=z_2$). The density of $(\mu_{n,\rho})_{z,\mathbb{R}u_{1,2}}$ on $z+\mathbb{R}u_{1,2}$ is $e^{-V(z_1+\frac{t}{\sqrt2})-V(z_1-\frac{t}{\sqrt2})-V(z_3)-\cdots-V(z_n)}$, renormalized. So we have
\[
d(\mu_{n,\rho})_{z,\mathbb{R}u_{1,2}}\;=\;\frac{e^{-V(z_1+\frac{t}{\sqrt2})-V(z_1-\frac{t}{\sqrt2})}\,dt}{\int e^{-V(z_1+\frac{s}{\sqrt2})-V(z_1-\frac{s}{\sqrt2})}\,ds}.
\]
So indeed, up to the parametrization it is of the form $\mu_{2,\rho'}$ with $\rho'=2z_1$.

Argument

Estimation of $c_P(\mu_{2,\rho})$.

We are working with measures on $\mathbb{R}$ of the form
\[
d\nu(t)=\frac{e^{-V(a+t)-V(a-t)}\,dt}{Z}.
\]
These are even log-concave measures on $\mathbb{R}$, and by a result of Bobkov: $c_P(\nu)\le c\int t^2\,d\nu(t)$.

Classical fact for $f:\mathbb{R}\to\mathbb{R}_+$ even and log-concave:
\[
\frac{1}{12}\,\frac{\big(\int f\big)^3}{f(0)^2}\;\le\;\int t^2 f(t)\,dt\;\le\;\frac{1}{2}\,\frac{\big(\int f\big)^3}{f(0)^2}.
\]
So we have:
\[
c_P(\nu)\;\le\;c\,\Big(\int e^{-[V(a+t)+V(a-t)-2V(a)]}\,dt\Big)^{2}.
\]
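The two constants in the classical fact are attained by the two standard extremals (a check added here): for the indicator $f=\mathbf{1}_{[-a,a]}$ one gets $\int t^2f(t)\,dt=\frac{2a^3}{3}=\frac{1}{12}\,\frac{(2a)^3}{1}$, and for the two-sided exponential $f(t)=e^{-|t|}$ one gets $\int t^2f(t)\,dt=4=\frac{1}{2}\,\frac{2^3}{1}$. Applied with $f(t)=e^{-V(a+t)-V(a-t)}$, for which $f(0)=e^{-2V(a)}$, the upper bound gives $\int t^2\,d\nu=\frac{\int t^2f}{\int f}\le\frac12\Big(\frac{\int f}{f(0)}\Big)^2$, which is exactly the squared integral appearing in the final estimate.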